How to Scrape JavaScript-Rendered Websites with Python & Selenium

In this guide:

  • Setting up a Digital Ocean droplet with Ubuntu 16.04.
  • Installing all the software and dependencies we need, including headless Chrome.
  • Running a crawler on a JavaScript-rendered website.

On my quest to learn, I wanted to eventually write beginner-friendly guides that really make one feel like they can improve. Normally we get hit with very long documentation and a getting-started section that only scratches the surface, without teaching us real-world possibilities before we invest more time in the tools. This guide assumes you have limited knowledge of the command line, the Python 3 language, and HTML. Let's consider the user story:

“Given a website with dynamically rendered JavaScript content, when I crawl it, then I want to be able to reach the generated content, not the raw JavaScript.”

An example of dynamically rendered JavaScript content is the Munchery menu page. Let's take a look:

Notice that the data is wrapped in a <script> tag? That data is in JSON format and is rendered to HTML upon loading. We have the option to parse the JSON data directly, but let's say we want to extract based on what we see once it's generated. Let's write out the steps on how we'd do that:
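As an aside, here's a minimal sketch of the "parse the JSON directly" option, using only the standard library. The HTML snippet and its markup are made up for illustration; a real page will use different tags and structure.

```python
import json
import re

# A made-up page fragment mimicking JSON data embedded in a <script> tag.
html = """
<html><body>
<script id="menu-data" type="application/json">
  {"items": [{"name": "Herb Roasted Chicken", "price": "$11.95"}]}
</script>
</body></html>
"""

def extract_script_json(page: str) -> dict:
    """Pull the contents of the first <script> tag and parse it as JSON."""
    match = re.search(r"<script[^>]*>(.*?)</script>", page, re.DOTALL)
    if match is None:
        raise ValueError("no <script> tag found")
    return json.loads(match.group(1))

data = extract_script_json(html)
print(data["items"][0]["name"])  # Herb Roasted Chicken
```

In this guide, though, we extract from the rendered HTML instead, which is where Selenium comes in.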

  1. Go to the Munchery site (be sure to check their robots.txt and terms before proceeding).
  2. Get through the landing page by entering an email address and zip code, and then click on the submit button to get to the Main Menu page.
  3. On the Main Menu Page, check if the image, name and price of each dish exists.

However, before we can do the above, we need to set up our server and environment.

Setting Up Our Environment and Crawl

For our environment, we'll be using a Digital Ocean (D.O.) virtual server, or what D.O. calls a droplet.

  • After creating your droplet, you should get an email with your server credentials. If you set up your SSH keys with D.O. (highly recommended), great, you can skip the next part about setting your password. Pull up your terminal and log into your server with this command, replacing “your_ip_address” with your IP address:
  • It will prompt you to agree; type “yes”, and then input the password from the D.O. email (you can copy and paste it). The server will then prompt you to change your password.
  • Now that you're logged into your server, let's update your system and install unzip. Run each of the 3 commands:
  • Next we'll download, unpack, and install the latest Google Chrome browser. Run each of the 7 commands:
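As a rough sketch, the login and update steps on Ubuntu 16.04 look like the following (replace your_ip_address with your droplet's IP; the package names are the usual Ubuntu ones):

```shell
# Log into the droplet as root (replace your_ip_address with your IP).
ssh root@your_ip_address

# Update the package index, upgrade installed packages, and install unzip.
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y unzip
```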

The commands below install the latest Chrome, but it doesn't work with Chromedriver 2.25, so skip this block:

cd ~

wget ""

sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo rm google-chrome-stable_current_amd64.deb
sudo apt-get install -y -f

We’ll then want to get Chromedriver so we can run Chrome headlessly. Get the latest Chromedriver version with:

  • At the time of writing this post, the latest version is 2.25. I recommend using version 2.25 before trying the latest one (EDIT: the latest version as of 2018-01-10 is 2.34, and it works). Download the 2.25 or latest Chromedriver by running the commands below in your terminal, replacing the version if it's different from 2.25. It will download into a new folder you'll create, “/var/chromedriver/”. Run the 3 commands:
  • Unzip the Chromedriver with the below:
  • Upgrade PIP with the command:
  • Installing Virtualenv will allow us to create a virtual environment and install Python packages in it without affecting our system's Python. Go here to learn more about Virtualenv:
  • Set up a virtual environment with the following command. It will create the folder /var/venv/:
  • Activate the virtual environment with:
  • You should now be in your virtual environment, identified by the (venv) tag on the left. You can check the pip version with “pip -V” and the Python version with “python -V”. Both should mention Python 3.5.
  • Almost there! Let’s get Selenium and PyVirtualDisplay. In your venv, run:
  • Your environment is now set up. Let's get a script in for you to run. Create a ‘crawlers’ folder and create a crawler script with your favorite text editor. I use vim:
  • Copy and paste the code below into the file. To type or add to the file in vim, start by pressing the ‘a’ key. To save, press ‘esc’, type ‘:wq!’ (without the single quotes, but with the colon), and press enter. The code:
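The setup steps above can be sketched as one consolidated sequence of commands. The Chromedriver URL follows the standard chromedriver.storage.googleapis.com download pattern, and crawler.py is an assumed filename, since the post doesn't name the file:

```shell
# Create a folder for Chromedriver and download version 2.25 into it.
sudo mkdir -p /var/chromedriver
cd /var/chromedriver
sudo wget https://chromedriver.storage.googleapis.com/2.25/chromedriver_linux64.zip

# Unzip the driver.
sudo unzip chromedriver_linux64.zip

# Upgrade pip and install virtualenv.
pip3 install --upgrade pip
pip3 install virtualenv

# Create and activate a Python 3 virtual environment in /var/venv/.
virtualenv -p python3 /var/venv
source /var/venv/bin/activate

# Inside (venv): install Selenium and PyVirtualDisplay.
pip install selenium pyvirtualdisplay

# Create the crawlers folder and open the script in vim.
mkdir ~/crawlers
cd ~/crawlers
vim crawler.py
```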
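As a minimal, heavily simplified sketch of such a crawler (not the exact script from the Github repo): the target URL, email, and zip code are placeholders; the form and menu class names come from the breakdown later in this post; and the per-dish title/price class names are assumptions.

```python
def parse_price(text):
    """Turn a price string like '$11.95' into a float."""
    return float(text.strip().lstrip("$"))

def main():
    # Imports are deferred so parse_price() can be used without a browser.
    from pyvirtualdisplay import Display
    from selenium import webdriver

    # Start an invisible X display and a headless Chrome session.
    display = Display(visible=0, size=(800, 600))
    display.start()
    driver = webdriver.Chrome("/var/chromedriver/chromedriver")
    try:
        # Step 1: go to the site (placeholder URL).
        driver.get("https://example.com")

        # Step 2: fill in the landing-page form and submit it.
        form = driver.find_element_by_class_name("signup-login-form")
        form.find_element_by_css_selector(".user-input.email").send_keys("you@example.com")
        form.find_element_by_css_selector(".user-input.zip-code").send_keys("94105")
        form.find_element_by_css_selector(".extra-large.orange.button").click()

        # Step 3: collect image, name, and price for each dish.
        menu = driver.find_element_by_css_selector("ul.menu-items.row")
        for li in menu.find_elements_by_tag_name("li"):
            image = li.find_element_by_tag_name("img").get_attribute("src")
            title = li.find_element_by_class_name("title").text   # assumed class
            price = parse_price(li.find_element_by_class_name("price").text)  # assumed class
            print(image, title, price)
    finally:
        driver.quit()
        display.stop()

# Call main() to run the crawl.
```

The `find_element_by_*` calls match the Selenium 2.x/3.x API current when this post was written.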

Run the crawler with the command (reminder: we’re still in (venv)!):
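The filename here is an assumption, since the original leaves it blank:

```shell
# From inside (venv), in the crawlers folder:
python crawler.py
```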

That's it! You've successfully run a crawler on the Munchery site! Your output should look like this:

Breaking Down the Crawler

I'll break down the crawler provided above and on Github. Here are the steps from above on what we want to do:

1. Go to the Munchery site.

2. Get through the landing page by entering an email address and zip code, and then click on the submit button to get to the Main Menu page.

3. On the Main Menu Page, get the image, name and price of each dish.

The script runs in this order:

  • Lines #95–96: Call the MuncherySpider class, which runs lines #79–91.
  • Line #80: Starts the driver from lines #16–21, where an invisible/headless Chrome browser is opened with a display of 800x600.
  • Line #82: Runs lines #31–34, where the browser goes to the URL passed in. In this case, it's the Munchery URL. Addresses Step 1 above.
  • Line #83: Runs lines #37–46, which attempt to find the class=“signup-login-form” element, type an email into the class=“user-input email” field, type the zip code into the class=“user-input zip-code” field, and finally click the element with the class=“extra-large orange button” within the parent “signup-login-form” element. Addresses Step 2.
  • The headless browser should now be on the Main Menu/Dishes page.
  • Line #84: Runs lines #48–55, where the driver grabs all “li” elements within the parent “ul” element with the class=“menu-items row”.
  • Line #51: Runs lines #57–77, which parse out the image, title, and price within each selected “li” element (lines #63–65). Addresses Step 3.
  • Line #86: Closes the driver and browser from lines #24–28.
  • Lines #99–100: Print out the results:


The above can be used as a base for automated front-end testing. Just as you would click around to see if your website works, you can do exactly that with Selenium. If you want to learn more about web crawling, I recommend checking out Scrapy. This isn't legal advice, but remember not to reproduce copyrighted content, and follow crawling best practices. As always, happy learning!

If you like this guide and would like to see more, check out my blog at

