Introduction
We'll cover how to use Headless Chrome for scraping Google Places. Google Places doesn't strictly require JavaScript, because Google serves a different response when JavaScript is disabled, but for better user emulation when browsing and scraping Google Places, a real browser is recommended.
Headless Chrome is essentially the Chrome browser running without a head (no graphical user interface). The benefit is that you can run a headless browser on a server that has no graphical environment attached and is normally accessed through shell access. Running headless can also be faster and puts less load on system resources.
Python is a beautiful language to code in. It has a great package ecosystem, there's much less noise than you'll find in other languages, and it is super easy to use. Python is used for a number of things, from data analysis to server programming, and one exciting use case is web scraping.
Controlling a browser
We need a way to control the browser with code, and this can be done through the Chrome DevTools Protocol, or CDP. CDP is essentially a WebSocket server running in the browser, based on JSON-RPC. Instead of working with CDP directly, we'll use a library called pyppeteer, a Python implementation of CDP that provides an easier-to-use abstraction. It's inspired by puppeteer, the Node library of the same name.
Setting up
As usual with any of my Python projects, I recommend working in a virtual environment, which helps us manage dependencies and versions separately for each application or project. Let's create a virtual environment in our home directory and install the dependencies we need.
Make sure you are running at least Python 3.6.1; Python 3.5 is at end of support. The pyppeteer library will not work with Python 3.6.0, because the websockets library it depends on does not support that version.
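A quick sketch of the setup; the virtual environment path is arbitrary and the only dependency we install directly is pyppeteer:

```
python3 -m venv ~/google-places-venv
source ~/google-places-venv/bin/activate
pip install pyppeteer
```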
Let's create the following folders and files.
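One layout that fits the files referenced later in this guide (the exact structure is an assumption):

```
google-places/
    __main__.py
    core/
        __init__.py
        browser.py
        utils.py
```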
We created a __main__.py file, which lets us run the Google Places scraper with the following command (nothing should happen right now):
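Since pointing python at a directory executes the __main__.py inside it, an invocation along these lines should work (the exact command is an assumption):

```
python google-places
```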
Launching a headless browser
We need to launch a Chrome browser. By default, pyppeteer will download and use its own version of Chromium. It's also possible to use Chrome, as long as it is installed on your system. The library makes use of async/await for concurrency, so we import Python's asyncio package to drive it.
To launch with Chrome instead of Chromium, pass the executablePath option to the launch function. Below, we launch the browser, navigate to Google and take a screenshot. The screenshot will be saved in the folder you run the scraper from.
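A minimal sketch, assuming an arbitrary google.png output filename:

```python
import asyncio

from pyppeteer import launch


async def main():
    # launch() starts a bundled Chromium by default; add
    # executablePath='/path/to/chrome' to use an installed Chrome instead.
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto('https://www.google.com')
    # The screenshot lands in the folder you run the scraper from.
    await page.screenshot({'path': 'google.png'})
    await browser.close()


asyncio.get_event_loop().run_until_complete(main())
```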
Digging in
Let's create some functions in core/browser.py to simplify working with the browser and the page. We'll make use of what I believe is an awesome Python feature for simplifying resource management: the context manager. Specifically, we will use an async context manager.
An asynchronous context manager is a context manager that is able to suspend execution in its enter and exit methods.
This feature lets us write code like the snippet below, which handles opening and closing a browser in one line.
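For example, something along these lines (PageSession is the async context manager we define next):

```python
async with PageSession('https://www.google.com') as session:
    ...  # use session.page here; the browser is opened and closed for us
```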
Let's add the PageSession async context manager in the file core/browser.py.
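A minimal sketch of what PageSession could look like; the constructor argument and the networkidle2 wait are assumptions:

```python
# core/browser.py
from pyppeteer import launch


class PageSession:
    """Launch a browser, navigate to a URL and clean everything up on exit."""

    def __init__(self, url):
        self.url = url
        self.browser = None
        self.page = None

    async def __aenter__(self):
        self.browser = await launch(headless=True)
        self.page = await self.browser.newPage()
        # networkidle2 waits until the page has (mostly) stopped making
        # requests, giving JavaScript a chance to finish rendering.
        await self.page.goto(self.url, waitUntil='networkidle2')
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self.browser.close()
```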
In our google-places/__main__.py file, let's make use of our new PageSession and print the HTML content of the final rendered page, with JavaScript executed.
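A sketch of the entry point under those assumptions:

```python
# google-places/__main__.py
import asyncio

from core.browser import PageSession


async def main():
    async with PageSession('https://www.google.com') as session:
        # content() returns the HTML after JavaScript has run.
        html = await session.page.content()
        print(html)


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
```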
Run the google-places module in your terminal with the same command we used earlier.
With the code above we can now launch a browser, open a page (a tab in Chrome), navigate to a website, wait for JavaScript to finish loading and executing, and then close the browser.
Next let's do the following:
- Visit google.com
- Enter a search query for pediatrician near 94118
- Click through to Google Places to see more results
- Scrape results from the page
- Save results to a CSV file
Navigating pages
We want to walk through a couple of page navigations so we end up on the page that has the data we need.
Let's start by breaking up our code in google-places/__main__.py so we first search and then navigate to Google Places. We also want to clean up some of the string literals, like the Google URL.
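A sketch of how that might look; the XPath selectors and the "View all" label are guesses that may need updating as Google changes its markup:

```python
# google-places/__main__.py (continued)
GOOGLE_URL = 'https://www.google.com'
SEARCH_QUERY = 'pediatrician near 94118'


async def search(session, query):
    page = session.page
    # Type the query into the search bar.
    await page.waitForXPath('//input[@name="q"]')
    search_box = (await page.xpath('//input[@name="q"]'))[0]
    await search_box.type(query)
    # Click the search button.
    search_button = (await page.xpath('//input[@name="btnK"]'))[0]
    await search_button.click()
    # Wait for the view-all button that leads to Google Places.
    await page.waitForXPath('//span[contains(text(), "View all")]')


async def go_to_places(session):
    page = session.page
    view_all = (await page.xpath('//span[contains(text(), "View all")]'))[0]
    await view_all.click()
    # Wait for an element on the new page so we know the navigation finished.
    await page.waitForXPath('//div[@role="main"]')
```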
In the code above, we use XPath to find the search bar, the search button and the view-all button that gets us to Google Places:
- Type in the search bar
- Click the search button
- Wait for the view all button to appear
- Click view all button to take us to google places
- Wait for an element on the new page to appear
Scraping the data with Pyppeteer
At this point we should be on the Google Places page and can pull the data we want. The navigation flow we followed is important for emulating a real user.
Let's define the data we want to pull from the page.
- Name
- Location
- Phone
- Rating
- Website Link
In core/browser.py, let's add two methods to our PageSession to help us grab the text and an attribute (the website link for the doctor).
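A sketch of the two methods, assuming each takes an element handle found elsewhere on the page:

```python
# core/browser.py (additions to PageSession)
class PageSession:
    # ... __aenter__ / __aexit__ as before ...

    async def get_text(self, element):
        # Evaluate JavaScript against the element, just as you could in the
        # Chrome console, and return its text content.
        return await self.page.evaluate('(el) => el.textContent', element)

    async def get_link(self, element):
        # Read the href attribute, i.e. the doctor's website link.
        return await self.page.evaluate('(el) => el.href', element)
```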
So we added get_text and get_link. These two methods evaluate JavaScript in the browser, the same as if you typed it into the Chrome console. You can see that they just use the DOM to grab the text of the element or its href attribute.
In google-places/__main__.py we will add a few functions that grab the content we care about from the page.
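A sketch of the selectors and a small helper; the XPath values below are hypothetical placeholders rather than Google's real markup:

```python
# google-places/__main__.py (continued)
# Hypothetical selectors, relative to one doctor's container element.
CONTAINER_XPATH = '//div[@class="doctor-result"]'
NAME_XPATH = './/span[@class="doctor-name"]'
LOCATION_XPATH = './/span[@class="doctor-location"]'
PHONE_XPATH = './/span[@class="doctor-phone"]'
RATING_XPATH = './/span[@class="doctor-rating"]'
WEBSITE_XPATH = './/a[contains(text(), "Website")]'


async def get_field_text(session, container, xpath, default=None):
    """Return the text of the first XPath match under container, or default."""
    matches = await container.xpath(xpath)
    if not matches:
        return default
    return await session.get_text(matches[0])
```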
We make use of XPath to grab the elements. You can practice XPath in your Chrome browser by pressing F12, or by right-clicking and choosing Inspect, to open the DevTools console. Why do I use XPath? It makes complex selectors easier to specify, because XPath has built-in functions for things like finding elements that contain some text or traversing the tree in various ways.
For the phone, rating and link fields we default to None and substitute 'N/A', because not all doctors have a phone number, a rating or a link listed. All of them seem to have a location and a name.
Because there are many doctors listed on the page, we want to find the parent element, loop over each match, and then evaluate the XPath we defined above. To do this, let's add two more functions to tie it all together.
The entry point here is scrape_doctors, which evaluates get_doctor_details on each container element.
In the code below, we loop over each container element that matched our XPath and get back a Future object by calling get_doctor_details. Because we don't use the await keyword, we get back a Future object that can be passed to the asyncio.gather call, which evaluates all the Future objects in the tasks list.
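A sketch of the two functions under the assumptions above:

```python
async def get_doctor_details(session, container):
    # Pull each field from one result container; optional fields fall back
    # from None to 'N/A' when they are missing.
    name = await get_field_text(session, container, NAME_XPATH)
    location = await get_field_text(session, container, LOCATION_XPATH)
    phone = await get_field_text(session, container, PHONE_XPATH) or 'N/A'
    rating = await get_field_text(session, container, RATING_XPATH) or 'N/A'
    links = await container.xpath(WEBSITE_XPATH)
    website = await session.get_link(links[0]) if links else 'N/A'
    return {'name': name, 'location': location, 'phone': phone,
            'rating': rating, 'website': website}


async def scrape_doctors(session):
    containers = await session.page.xpath(CONTAINER_XPATH)
    # Calling get_doctor_details without awaiting gives us awaitables that
    # asyncio.gather can run concurrently.
    tasks = [get_doctor_details(session, container) for container in containers]
    return await asyncio.gather(*tasks)
```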
The await asyncio.gather(*tasks) line allows us to wait for all of the async calls to finish concurrently.
Let's put this together in our main function. First we search and crawl to the right page, then we scrape with scrape_doctors.
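Under the same assumptions, main might look like this:

```python
async def main():
    async with PageSession(GOOGLE_URL) as session:
        await search(session, SEARCH_QUERY)
        await go_to_places(session)
        doctors = await scrape_doctors(session)
    print(doctors)


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
```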
Saving the output
In core/utils.py we'll add two functions to help us save our scraped output to a local CSV file.
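One possible shape for the two helpers; the function names and column order are assumptions:

```python
# core/utils.py
import csv

FIELDS = ['name', 'location', 'phone', 'rating', 'website']


def to_rows(doctors):
    """Flatten each doctor dict into a list using a fixed column order."""
    return [[doctor.get(field, 'N/A') for field in FIELDS] for doctor in doctors]


def save_csv(rows, path):
    """Write a header row followed by one row per doctor."""
    with open(path, 'w', newline='') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow([field.title() for field in FIELDS])
        writer.writerows(rows)
```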
Let's import them in google-places/__main__.py and save the output of scrape_doctors from our main function.
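Continuing the sketch, the only changes are the import and swapping the final print for the save call:

```python
# google-places/__main__.py (continued)
from core.utils import save_csv, to_rows


async def main():
    async with PageSession(GOOGLE_URL) as session:
        await search(session, SEARCH_QUERY)
        await go_to_places(session)
        doctors = await scrape_doctors(session)
    save_csv(to_rows(doctors), 'pediatricians.csv')
```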
We should now have a file called pediatricians.csv which contains our output.
Wrapping up
From this guide we should have learned how to use a headless browser to crawl and scrape Google Places while emulating a real user. There's a lot more you can do with headless browsers, such as generating PDFs, taking screenshots and other automation tasks.
Hopefully this guide helped you get started executing JavaScript and scraping with a headless browser. Till next time!