
4 Ways to Scrape Data from a Table


A lot of the data on web pages is presented in table format. However, it can be difficult to store that data on your local computer for later access, because it is embedded in the page's HTML and not available for download in a structured format such as CSV. Web scraping is the easiest way to get this data onto your local computer.

In this article, you will learn 4 ways to scrape tables from websites, both with and without coding.

Scrape Table Data Without Coding

Octoparse is a powerful web scraping tool that helps you extract data at scale in a short time, and it is easy to get started with. Using drag and drop, you can build a workflow that scrapes the information you need from a website. It is widely used by online sellers, marketers, researchers, and data analysts. Let’s take a detailed look at how to scrape data from a table using Octoparse’s Advanced Mode.


Steps to Scrape Table Data with Octoparse

First, download and launch Octoparse, and create a free account.

Step 1: Click Advanced Mode to start a new project.

Step 2: Enter the target URL into the box and click “Save URL” to open the website in Octoparse’s built-in browser.

Step 3: Create a pagination loop with 3 clicks:

a) Click “B” in the browser

b) Click “Select all” in the “Action Tips” panel

c) Click “Loop click each URL” in the “Action Tips” panel

Now, we can see a pagination loop has been created in the workflow box.

Step 4: Scrape the table with the clicks below.

a) Click on the first cell in the first row of the table

b) Click the expansion icon in the “Action Tips” panel until the whole row is highlighted in green (the selected tag should usually be TR)

c) Click “Select all sub-elements” in the “Action Tips” panel, then click “Extract data” and “Extract data in the loop”

The loop for scraping the table is built into the workflow.

Step 5: Extract data and download it in any file format.

With the above 5 steps, the table data is extracted and ready to export.

Adding pagination makes the whole scraping process a bit more involved, but it is also where Octoparse shines: it is well suited to scraping data in bulk.

And the most amazing part is that no coding knowledge is needed: whether we are programmers or not, we can create our own “crawler” to get the data we need all by ourselves. To learn more about scraping data from a table or a form, please refer to the detailed guides How to extract data from a table/form and 3 methods to export HTML table to Excel.

Scrape Table Data with Google Sheets

Google Sheets has a handy function called IMPORTHTML that can scrape data from a table on an HTML page using a fixed expression: =IMPORTHTML(URL, "table", num).

How to Use Google Sheets to Extract Table Data

Step 1: Open a new Google Sheet and start typing the expression into a blank cell. A brief description of the formula will show up.

Step 2: Enter the URL (example: https://en.wikipedia.org/wiki/Forbes%27_list_of_the_world%27s_highest-paid_athletes) and adjust the index argument as needed.
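For instance, to pull the first table from that Wikipedia page into the sheet, the complete formula would look like this (the index 1 assumes the table you want is the first one on the page; adjust it if it is not):

=IMPORTHTML("https://en.wikipedia.org/wiki/Forbes%27_list_of_the_world%27s_highest-paid_athletes", "table", 1)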

With the above 2 steps, the table is scraped into Google Sheets within minutes. Clearly, this is a great way to pull a table directly into a spreadsheet. However, there is an obvious limitation: scraping tables across multiple pages this way quickly becomes a mundane, manual task. For that, you need a more efficient way to automate the process.

Scrape Table Data with R (Using the rvest Package)

In this case, I use this website, https://www.babynameguide.com/categoryafrican.asp?strCat=African, as an example to show how to scrape tables with rvest.

Before we start writing the code, we need to know some basics of the rvest package.

html_nodes(): Selects particular parts of a document. We can use CSS selectors, like html_nodes(doc, "table td"), or XPath selectors, like html_nodes(doc, xpath = "//table//td").

html_tag(): Extracts the tag name (renamed html_name() in recent versions of rvest). Similar functions include html_text(), html_attr(), and html_attrs().

html_table(): Parses HTML tables and returns them as R data frames.

Apart from the above, there are also functions for simulating human browsing behavior, such as html_session(), jump_to(), follow_link(), back(), forward(), and submit_form().

In this case, we need html_table() to achieve our goal of scraping data from a table.

Download R (https://cran.r-project.org/) first.

Steps to Scrape Table Data with R

Step 1: Install rvest.

Step 2: Write the code, covering the key points below (the variable names are illustrative).

library(rvest): import the rvest package

library(magrittr): import the magrittr package

url <- "https://www.babynameguide.com/categoryafrican.asp?strCat=African": the target URL

page <- read_html(url): read the HTML from the target URL

tables <- html_table(page): parse the tables on the page into a list of data frames

Step 3: After all the code is written in the R panel, press “Enter” to run the script. The table information is returned right away.

Scrape Table from Website with Python

Python is an interpreted, high-level programming language widely used for general-purpose programming and data scraping. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than languages such as C++ or Java. Using Python to scrape data from a table or form is a good method if you are a programmer or are comfortable with coding.

There are many Python libraries and modules that you can use to scrape data from a table. Check out the links below for detailed tutorials on using Python to scrape data from a table or form; a short sketch follows the links for a quick taste.

Scrape Tables From any website using Python

How to Scrape Table from Website using Python
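As that quick taste (not taken from the guides above), here is a minimal sketch using the pandas library, which can read every HTML table on a page into data frames in a couple of lines. It assumes pandas and lxml are installed, it reuses the Wikipedia URL from the Google Sheets example, and the variable and file names are just placeholders.

import pandas as pd

# read_html() returns a list of DataFrames, one for each HTML table found on the page
url = "https://en.wikipedia.org/wiki/Forbes%27_list_of_the_world%27s_highest-paid_athletes"
tables = pd.read_html(url)
print(len(tables), "tables found")

# Save the table you want (the index may vary from page to page) as a CSV file
tables[0].to_csv("table.csv", index=False)

Note that some sites reject requests without a browser User-Agent header; in that case you can fetch the page yourself (for example with the requests library) and pass the HTML text to read_html() instead.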

It may seem that using a web scraping tool takes no less effort than writing a few lines of code to extract table data. In fact, programming has a steep learning curve that keeps most people from tapping into the real power of web scraping, and it makes it harder for those outside the tech industry to gain a competitive edge from web data.

I hope the tutorials above give you a general idea of how a web scraping tool can help you achieve, with ease, the same results a programmer gets.
