Pragati Verma

Scraping Infinite Scroll Pages with a 'Load More' Button: A Step-by-Step Guide

Are your scrapers stuck when trying to load data from dynamic web pages? Are you frustrated with infinite scrolls or those pesky "Load more" buttons?

You're not alone. Many websites today implement these designs to improve user experience—but they can be challenging for web scrapers.

This tutorial is a beginner-friendly walkthrough for scraping a demo page with a "Load More" button. Here's what the target web page looks like:

Demo web page for scraping

By the end, you'll learn how to:

  • Set up Selenium for web scraping.
  • Automate the "Load more" button interaction.
  • Extract product data such as names, prices, and links.

Let's dive in!

Step 1: Prerequisites

Before diving in, make sure you have the following:

  • Python Installed: Download and install the latest Python version from python.org, including pip during setup.
  • Basic Knowledge: Familiarity with web scraping concepts, Python programming, and working with libraries such as requests, BeautifulSoup, and Selenium.

Libraries Required:

  • Requests: For sending HTTP requests.
  • BeautifulSoup: For parsing the HTML content.
  • Selenium: For simulating user interactions like button clicks in a browser.

You can install these libraries using the following command in your terminal:

pip install requests beautifulsoup4 selenium

Before using Selenium, you must install a web driver matching your browser. For this tutorial, we'll use Google Chrome and ChromeDriver. However, you can follow similar steps for other browsers like Firefox or Edge.

Install the Web Driver

  1. Check your browser version: Open Google Chrome and navigate to Help > About Google Chrome from the three-dot menu to find your Chrome version.

  2. Download ChromeDriver: Visit the ChromeDriver download page and download the driver version that matches your Chrome version.

  3. Add ChromeDriver to your system PATH: Extract the downloaded file and place it in a directory like /usr/local/bin (Mac/Linux) or C:\Windows\System32 (Windows).
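
Once ChromeDriver is in place, you can confirm it's discoverable from your PATH by printing its version in the terminal:

chromedriver --version

Note that if you're on Selenium 4.6 or newer, Selenium Manager can download a matching driver for you automatically, so the manual steps above are optional.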

Verify Installation

Create a Python file named scraper.py in your project directory and verify that everything is set up correctly by running the following snippet:

from selenium import webdriver
driver = webdriver.Chrome() # Ensure ChromeDriver is installed and in PATH
driver.get("https://www.scrapingcourse.com/button-click")
print(driver.title)
driver.quit()

You can execute this file by running the following command in your terminal:

python scraper.py

If the above code runs without errors, it will spin up a browser interface and open the demo page URL as shown below:

Demo Page in Selenium Browser Instance

Selenium will then extract the HTML and print the page title. You will see output like this:

Load More Button Challenge to Learn Web Scraping - ScrapingCourse.com

This verifies that Selenium is set up correctly. With all the requirements in place, you can start accessing the demo page's content.

Step 2: Get Access to the Content

The first step is to fetch the page's initial content, which gives you a baseline snapshot of the page's HTML. This will help you verify connectivity and ensure a valid starting point for the scraping process.

You will retrieve the HTML content of the page URL by sending a GET request using the Requests library in Python. Here's the code:

import requests
# URL of the demo page with products
url = "https://www.scrapingcourse.com/button-click"
# Send a GET request to the URL
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
    html_content = response.text
    print(html_content) # Optional: Preview the HTML
else:
    print(f"Failed to retrieve content: {response.status_code}")

The above code will output the raw HTML containing the data for the first 12 products.

This quick preview of the HTML ensures that the request was successful and that you're working with valid data.
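
As a quick sanity check, you can parse this initial snapshot with BeautifulSoup and count the product cards. Assuming the product-item class you'll find when inspecting the page in the next step, the count should be 12:

from bs4 import BeautifulSoup

# Parse the initial snapshot and count the product cards
soup = BeautifulSoup(html_content, "html.parser")
initial_items = soup.find_all("div", class_="product-item")
print(f"Products in the initial HTML: {len(initial_items)}")  # Expected: 12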

Step 3: Load More Products

To access the remaining products, you'll need to programmatically click the "Load more" button on the page until no more products are available. Since this interaction involves JavaScript, you will use Selenium to simulate the button click.

Before writing code, let’s inspect the page to locate:

  • The "Load more" button selector (load-more-btn).
  • The div holding the product details (product-item).

Running the following code will click the button repeatedly until every product has loaded, giving you the complete dataset:

from selenium import webdriver
from selenium.webdriver.common.by import By
import time
# Set up the WebDriver (make sure you have the appropriate driver installed, e.g., ChromeDriver)
driver = webdriver.Chrome()
# Open the page
driver.get("https://www.scrapingcourse.com/button-click")
# Loop to click the "Load More" button until there are no more products
while True:
    try:
        # Find the "Load more" button by its ID and click it
        load_more_button = driver.find_element(By.ID, "load-more-btn")
        load_more_button.click()
        # Wait for the content to load (adjust time as necessary)
        time.sleep(2)
    except Exception:
        # If no "Load More" button is found (end of products), break out of the loop
        print("No more products to load.")
        break
# Get the updated page content after all products are loaded
html_content = driver.page_source
# Close the browser window
driver.quit()

This code opens the browser, navigates to the page, and interacts with the "Load more" button. The updated HTML, now containing more product data, is then extracted.
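
The fixed time.sleep(2) delay works for a demo but can be flaky on slower connections. As a more robust sketch, you could use Selenium's explicit waits to pause until the product count actually grows (same element ID and class name as above; the 10-second timeout is an arbitrary choice):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.scrapingcourse.com/button-click")
wait = WebDriverWait(driver, 10)

while True:
    try:
        # Wait until the button is present and clickable, then click it
        button = wait.until(EC.element_to_be_clickable((By.ID, "load-more-btn")))
        prev_count = len(driver.find_elements(By.CLASS_NAME, "product-item"))
        button.click()
        # Wait until new product cards have actually been appended
        wait.until(lambda d: len(d.find_elements(By.CLASS_NAME, "product-item")) > prev_count)
    except Exception:
        # Timeout: the button is gone or no new products arrived
        print("No more products to load.")
        break

html_content = driver.page_source
driver.quit()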

If you don't want Selenium to open a visible browser window every time you run this code, you can use its headless mode. A headless browser has all the functionality of a regular web browser but no Graphical User Interface (GUI).

You can enable the headless mode for Chrome in Selenium by defining a ChromeOptions object and passing it to the WebDriver Chrome constructor like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

import time

# instantiate a Chrome options object
options = webdriver.ChromeOptions()

# set the options to use Chrome in headless mode
options.add_argument("--headless=new")

# initialize an instance of the Chrome driver (browser) in headless mode
driver = webdriver.Chrome(options=options)

...

When you run the above code, Selenium will launch a headless Chrome instance, so you’ll no longer see a Chrome window. This is ideal for production environments where you don’t want to waste resources on the GUI when running the scraping script on a server.
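
One caveat: headless Chrome can start with a different default window size than a visible browser, which occasionally changes what a page renders. If you hit layout-dependent issues, it's worth pinning the viewport explicitly (the resolution below is an arbitrary desktop-like choice):

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
# Pin the viewport so the page renders as it would on a typical desktop
options.add_argument("--window-size=1920,1080")
driver = webdriver.Chrome(options=options)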

Now that you've retrieved the complete HTML content, it's time to extract specific details about each product.

Step 4: Parse Product Information

In this step, you'll use BeautifulSoup to parse the HTML and identify product elements. Then, you'll extract key details for each product, such as the name, price, and links.

from bs4 import BeautifulSoup
# Parse the page content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
# Extract product details
products = []
# Find all product items in the grid
product_items = soup.find_all('div', class_='product-item')
for product in product_items:
    # Extract the product name
    name = product.find('span', class_='product-name').get_text(strip=True)

    # Extract the product price
    price = product.find('span', class_='product-price').get_text(strip=True)

    # Extract the product link
    link = product.find('a')['href']

    # Extract the image URL
    image_url = product.find('img')['src']

    # Create a dictionary with the product details
    products.append({
        'name': name,
        'price': price,
        'link': link,
        'image_url': image_url
    })
# Print the extracted product details
for product in products[:2]:
    print(f"Name: {product['name']}")
    print(f"Price: {product['price']}")
    print(f"Link: {product['link']}")
    print(f"Image URL: {product['image_url']}")
    print('-' * 30)

In the output, you should see a structured list of product details, including the name, price, link, and image URL, like this:

Name: Chaz Kangeroo Hoodie
Price: $52
Link: https://scrapingcourse.com/ecommerce/product/chaz-kangeroo-hoodie
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh01-gray_main.jpg
------------------------------
Name: Teton Pullover Hoodie
Price: $70
Link: https://scrapingcourse.com/ecommerce/product/teton-pullover-hoodie
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh02-black_main.jpg
------------------------------

The above code organizes the raw HTML data into a structured format, making it easier to work with and ready for further processing.

Step 5: Export Product Information to CSV

You can now organize the extracted data into a CSV file, which makes it easier to analyze or share. Python's built-in csv module handles this.

import csv
# Write the product information to a CSV file
with open("products.csv", mode="w", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=["name", "image_url", "price", "link"])
    writer.writeheader()
    for product in products:
        writer.writerow(product)

The above code will create a new CSV file with all the required product details.
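
To double-check the export, you can read the file back with csv.DictReader and preview what was written; a quick sketch:

import csv

# Read the CSV back to verify the export
with open("products.csv", newline="") as file:
    rows = list(csv.DictReader(file))

print(f"Wrote {len(rows)} products to products.csv")
print(rows[0])  # Preview the first row as a dict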

Here's the complete code for an overview:

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import time
import csv
# Set up the WebDriver (make sure you have the appropriate driver installed, e.g., ChromeDriver)
driver = webdriver.Chrome()
# Open the page
driver.get("https://www.scrapingcourse.com/button-click")
# Loop to click the "Load More" button until there are no more products to load
while True:
    try:
        # Find the "Load more" button by its ID and click it
        load_more_button = driver.find_element(By.ID, "load-more-btn")
        load_more_button.click()
        # Wait for the content to load (adjust time as necessary)
        time.sleep(2)
    except Exception:
        # If no "Load More" button is found (end of products), break out of the loop
        print("No more products to load.")
        break
# Get the updated page content after all products are loaded
html_content = driver.page_source
# Close the browser window
driver.quit()
# Parse the page content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
# Extract product details
products = []
# Find all product items in the grid
product_items = soup.find_all('div', class_='product-item')
for product in product_items:
    # Extract the product name
    name = product.find('span', class_='product-name').get_text(strip=True)

    # Extract the product price
    price = product.find('span', class_='product-price').get_text(strip=True)

    # Extract the product link
    link = product.find('a')['href']

    # Extract the image URL
    image_url = product.find('img')['src']

    # Create a dictionary with the product details
    products.append({
        'name': name,
        'price': price,
        'link': link,
        'image_url': image_url
    })
# Print the extracted product details
for product in products[:2]: # You can modify the slice as needed to check more products
    print(f"Name: {product['name']}")
    print(f"Price: {product['price']}")
    print(f"Link: {product['link']}")
    print(f"Image URL: {product['image_url']}")
    print('-' * 30)
# Write the product information to a CSV file
with open("products.csv", mode="w", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=["name", "image_url", "price", "link"])
    writer.writeheader()
    for product in products:
        writer.writerow(product)

The above code will create a products.csv file that looks like this:

name,image_url,price,link
Chaz Kangeroo Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh01-gray_main.jpg,$52,https://scrapingcourse.com/ecommerce/product/chaz-kangeroo-hoodie
Teton Pullover Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh02-black_main.jpg,$70,https://scrapingcourse.com/ecommerce/product/teton-pullover-hoodie
Bruno Compete Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh03-black_main.jpg,$63,https://scrapingcourse.com/ecommerce/product/bruno-compete-hoodie
…

Step 6: Get Extra Data for Top Products

Now, let's say you want to identify the top 5 highest-priced products and extract additional data (such as the product description and SKU code) from their individual pages. You can do that using the code as follows:

# Sort products by price in descending order
sorted_products = sorted(products, key=lambda x: float(x['price'].replace('$', '').replace(',', '')), reverse=True)
# Scrape extra details for the top 5 products
driver = webdriver.Chrome()
for product in sorted_products[:5]:
    driver.get(product['link'])
    time.sleep(3)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    description = soup.find('div', class_='product-description')
    product['description'] = description.get_text(strip=True) if description else "No description"
    sku = soup.find('span', class_='sku')
    product['sku'] = sku.get_text(strip=True) if sku else "No SKU"
driver.quit()

Here's the complete code for an overview:

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import time
import csv
# Set up the WebDriver (make sure you have the appropriate driver installed, e.g., ChromeDriver)
driver = webdriver.Chrome()
# Open the page
driver.get("https://www.scrapingcourse.com/button-click")
# Loop to click the "Load More" button until there are no more products to load
while True:
    try:
        # Find the "Load more" button by its ID and click it
        load_more_button = driver.find_element(By.ID, "load-more-btn")
        load_more_button.click()
        # Wait for the content to load (adjust time as necessary)
        time.sleep(2)
    except Exception:
        # If no "Load More" button is found (end of products), break out of the loop
        print("No more products to load.")
        break
# Get the updated page content after all products are loaded
html_content = driver.page_source

# Parse the page content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
# Extract product details
products = []
# Find all product items in the grid
product_items = soup.find_all('div', class_='product-item')
for product in product_items:
    # Extract the product name
    name = product.find('span', class_='product-name').get_text(strip=True)

    # Extract the product price
    price = product.find('span', class_='product-price').get_text(strip=True)

    # Convert price to a float for sorting (remove '$' or other symbols as needed)
    price_float = float(price.replace('$', '').replace(',', '').strip())

    # Extract the product link
    link = product.find('a')['href']

    # Extract the image URL
    image_url = product.find('img')['src']

    # Create a dictionary with the product details
    products.append({
        'name': name,
        'price': price,
        'price_float': price_float, # store as float for sorting
        'link': link,
        'image_url': image_url
    })
# Sort products by price (descending order)
sorted_products = sorted(products, key=lambda x: x['price_float'], reverse=True)
# Get the top 5 highest-priced products
top_5_products = sorted_products[:5]
# Visit each of the top 5 product pages and extract extra data
for product in top_5_products:
    # Open the product page
    driver.get(product['link'])
    time.sleep(3) # Wait for the page to load
    # Parse the page content of the product page
    product_page_soup = BeautifulSoup(driver.page_source, 'html.parser')
    # Extract product description and SKU
    description = product_page_soup.find('div', class_='product-description')
    description_text = description.get_text(strip=True) if description else 'No description available'
    sku = product_page_soup.find('span', class_='sku')
    sku_code = sku.get_text(strip=True) if sku else 'No SKU available'
    # Add the extra data to the product details
    product['description'] = description_text
    product['sku'] = sku_code
# Close the browser window after scraping product pages
driver.quit()
# Print the extracted product details for the top 5 products
for product in top_5_products:
    print(f"Name: {product['name']}")
    print(f"Price: {product['price']}")
    print(f"Link: {product['link']}")
    print(f"Image URL: {product['image_url']}")
    print(f"Description: {product['description']}")
    print(f"SKU: {product['sku']}")
    print('-' * 30)
# Write the product information to a CSV file, including the extra data
with open("products.csv", mode="w", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=["name", "image_url", "price", "link", "description", "sku"])
    writer.writeheader()
    for product in products:
        writer.writerow({
            'name': product['name'],
            'image_url': product['image_url'],
            'price': product['price'],
            'link': product['link'],
            'description': product.get('description', ''),
            'sku': product.get('sku', '')
        })

This code sorts the products by price in descending order. Then, for the top 5 highest-priced products, the script opens their product pages and extracts the product description and SKU using BeautifulSoup.
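
As an aside, since the driver is already on each product page, you could read these fields directly through Selenium instead of re-parsing the HTML with BeautifulSoup. A minimal sketch of the top-5 loop, assuming the same product-description and sku class names:

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

for product in top_5_products:
    driver.get(product['link'])
    time.sleep(3)  # Wait for the page to load
    try:
        product['description'] = driver.find_element(By.CLASS_NAME, "product-description").text
    except NoSuchElementException:
        product['description'] = 'No description available'
    try:
        product['sku'] = driver.find_element(By.CLASS_NAME, "sku").text
    except NoSuchElementException:
        product['sku'] = 'No SKU available'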

The output of the above code will look like this:

Name: Lando Gym Jacket
Price: $99
Link: https://scrapingcourse.com/ecommerce/product/lando-gym-jacket
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mj08-gray_main.jpg
Description: No description available
SKU: MJ08
------------------------------
Name: Ingrid Running Jacket
Price: $84
Link: https://scrapingcourse.com/ecommerce/product/ingrid-running-jacket
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/wj04-white_main.jpg
Description: No description available
SKU: WJ04
------------------------------
Name: Zeppelin Yoga Pant
Price: $82
Link: https://scrapingcourse.com/ecommerce/product/zeppelin-yoga-pant
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mp08-green_main.jpg
Description: No description available
SKU: MP08
------------------------------
Name: Zeppelin Yoga Pant
Price: $82
Link: https://scrapingcourse.com/ecommerce/product/zeppelin-yoga-pant
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mp08-green_main.jpg
Description: No description available
SKU: MP08
------------------------------
Name: Juno Jacket
Price: $77
Link: https://scrapingcourse.com/ecommerce/product/juno-jacket
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/wj06-purple_main.jpg
Description: No description available
SKU: No SKU available
------------------------------

The above code will update products.csv, which will now include the description and SKU columns:

name,image_url,price,link,description,sku
Chaz Kangeroo Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh01-gray_main.jpg,$52,https://scrapingcourse.com/ecommerce/product/chaz-kangeroo-hoodie,,
Teton Pullover Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh02-black_main.jpg,$70,https://scrapingcourse.com/ecommerce/product/teton-pullover-hoodie,,
Bruno Compete Hoodie,https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh03-black_main.jpg,$63,https://scrapingcourse.com/ecommerce/product/bruno-compete-hoodie,,
…

Conclusion

Scraping pages with infinite scrolling or "Load more" buttons can seem challenging, but using tools like Requests, Selenium, and BeautifulSoup simplifies the process.

This tutorial showed how to retrieve and process product data from a demo page, saving it in a structured format for quick and easy access.

See all the code snippets here.
