Web Scraper Node Js

The Ultimate Guide to Web Scraping with Node.js

So what’s web scraping anyway? It involves automating away the laborious task of collecting information from websites.

There are a lot of use cases for web scraping: you might want to collect prices from various e-commerce sites for a price comparison site. Or perhaps you need flight times and hotel/AirBNB listings for a travel site. Maybe you want to collect emails from various directories for sales leads, or use data from the internet to train machine learning/AI models. Or you could even be wanting to build a search engine like Google!

Getting started with web scraping is easy, and the process can be broken down into two main parts: acquiring the data using an HTML request library or a headless browser, and parsing the data to get the exact information you want.

This guide will walk you through the process with the popular request-promise module, CheerioJS, and Puppeteer. Working through the examples in this guide, you will learn all the tips and tricks you need to become a pro at gathering any data you need with Node.js! We will be gathering a list of all the names and birthdays of U.S. presidents from Wikipedia and the titles of all the posts on the front page of Reddit.

First things first: let’s install the libraries we’ll be using in this guide (Puppeteer will take a while to install as it needs to download Chromium as well).
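The original install command isn’t reproduced here, but a minimal version covering the modules used in this guide would look something like this (request-promise also needs the request package it wraps; exact versions aren’t important):

npm install request request-promise cheerio puppeteer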
Making your first request

Next, let’s open a new file and write a quick function to get the HTML of the Wikipedia “List of Presidents” page.

Chrome DevTools

Cool, we got the raw HTML from the web page! But now we need to make sense of this giant blob of text. To do that, we’ll use Chrome DevTools, which lets us easily search through the HTML of a web page. Using Chrome DevTools is easy: simply open Google Chrome and right click on the element you would like to scrape (in this case I am right clicking on George Washington, because we want to get links to all of the individual presidents’ Wikipedia pages). Now, simply click inspect, and Chrome will bring up its DevTools pane, allowing you to easily inspect the page’s source.

Parsing HTML with Cheerio.js

Awesome, Chrome DevTools is now showing us the exact pattern we should be looking for in the code (a “big” tag with a hyperlink inside of it). Let’s use Cheerio.js to parse the HTML we received earlier and return a list of links to the individual Wikipedia pages of U.S. presidents. We check to make sure there are exactly 45 elements returned (the number of U.S. presidents), meaning there aren’t any extra hidden “big” tags elsewhere on the page. Now, we can go through and grab a list of links to all 45 presidential Wikipedia pages by getting them from the “attribs” section of each element.

Now we have a list of all 45 presidential Wikipedia pages. Let’s create a new file, which will contain a function that takes a presidential Wikipedia page and returns the president’s name and birthday. First things first, let’s get the raw HTML from George Washington’s Wikipedia page. Let’s once again use Chrome DevTools to find the syntax of the code we want to parse, so that we can extract the name and birthday with Cheerio.js. We see that the name is in a class called “firstHeading” and the birthday is in a class called “bday”. Let’s modify our code to use Cheerio.js to extract these two classes.

Putting it all together

Perfect! Now let’s wrap this up into a function and export it from this module. Then let’s return to our original file and require the new module. We’ll then apply it to the list of wikiUrls we gathered earlier.

JavaScript Pages

Voilà! A list of the names and birthdays of all 45 U.S. presidents. Using just the request-promise module and Cheerio.js should allow you to scrape the vast majority of sites on the internet.

Recently, however, many sites have begun using JavaScript to generate dynamic content on their websites. This causes a problem for request-promise and other similar HTTP request libraries (such as axios and fetch): they only get the response from the initial request, but they cannot execute the JavaScript the way a web browser can. So, to scrape sites that require JavaScript execution, we need another solution. In our next example, we will get the titles of all of the posts on the front page of Reddit. Let’s see what happens when we try to use request-promise as we did in the previous example. Looking at the output: hmmm…not quite what we want. That’s because getting the actual content requires you to run the JavaScript on the page! With Puppeteer, that’s no problem.

Puppeteer is an extremely popular new module brought to you by the Google Chrome team that allows you to control a headless browser. This is perfect for programmatically scraping pages that require JavaScript execution. Let’s get the HTML from the front page of Reddit using Puppeteer instead of request-promise. Now the page is filled with the correct content! We can use Chrome DevTools like we did in the previous example; it looks like Reddit is putting the titles inside “h2” tags. Let’s use Cheerio.js to extract the h2 tags from the page.
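The guide’s original code isn’t shown above, but a minimal sketch of this final Reddit step, assuming the standard Puppeteer and Cheerio APIs and the reddit.com front-page URL, could look like this:

const puppeteer = require('puppeteer');
const cheerio = require('cheerio');

(async () => {
  // Launch a headless browser, load the Reddit front page, and grab the fully rendered HTML.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.reddit.com');
  const html = await page.content();
  await browser.close();

  // Parse the rendered HTML with Cheerio and collect the post titles from the h2 tags.
  const $ = cheerio.load(html);
  const titles = [];
  $('h2').each(function () {
    titles.push($(this).text());
  });
  console.log(titles);
})();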
Additional Resources

And there’s the list! At this point you should feel comfortable writing your first web scraper to gather data from any website. Here are a few additional resources that you may find helpful during your web scraping journey:

List of web scraping proxy services
List of handy web scraping tools
List of web scraping tips
Comparison of web scraping proxies
Cheerio Documentation
Puppeteer Documentation
Build a web scraper with Node – Pusher

You will need Node 8+ installed on your machine.
Web scraping refers to the process of gathering information from a website through automated scripts. This eases the process of gathering large amounts of data from websites where no official API has been defined.
The process of web scraping can be broken down into two main steps:
Fetching the HTML source code of the website through an HTTP request or by using a headless browser.
Parsing the raw data to extract just the information you’re interested in.
We’ll examine both steps during the course of this tutorial. At the end of it all, you should be able to build a web scraper for any website with ease.
Prerequisites
To complete this tutorial, you need to have Node.js (version 8.x or later) and npm installed on your computer. This page contains instructions on how to install or upgrade your Node installation to the latest version.
Getting started
Create a new scraper directory for this tutorial and initialize it with a package.json file by running npm init -y from the project root.
Next, install the dependencies that we’ll need to build the web scraper:
npm install axios cheerio puppeteer --save
Here’s what each one does:
Axios: Promise-based HTTP client for Node.js and the browser.
Cheerio: jQuery implementation for Node.js. Cheerio makes it easy to select, edit, and view DOM elements.
Puppeteer: A library for controlling Google Chrome or Chromium.
You may need to wait a bit for the installation to complete as the puppeteer package needs to download Chromium as well.
Scrape a static website with Axios and Cheerio
To demonstrate how you can scrape a website using Axios and Cheerio, we’re going to set up a script to scrape the Premier League website for some player stats. Specifically, we’ll scrape the website for the top 20 goalscorers in Premier League history and organize the data as JSON.
Create a new file in the root of your project directory and populate it with the following code:
const axios = require('axios');

// URL of the Premier League top goalscorers stats page goes here.
const url = '';

axios(url)
  .then(response => {
    const html = response.data;
    console.log(html);
  })
  .catch(console.error);
If you run the code with node, a long string of HTML will be printed to the console. But how can you parse the HTML for the exact data you need? That’s where Cheerio comes in.
Cheerio allows us to use jQuery methods to parse an HTML string and extract whatever information we want from it. But before you write any code, let’s examine the exact data that we need through the browser dev tools.
Open this link in your browser, and open the dev tools on that page. Use the inspector tool to highlight the body of the table listing the top goalscorers in Premier League history.
As you can see, the table body has a class of .statsTableContainer. We can select all the rows using Cheerio like this: $('.statsTableContainer > tr'). Go ahead and update the file to look like this:
const cheerio = require('cheerio');

// ...inside the axios .then() callback from before:
const $ = cheerio.load(html);
const statsTable = $('.statsTableContainer > tr');
console.log(statsTable.length);
Unlike jQuery, which operates on the browser DOM, Cheerio needs to be given the HTML document before it can parse it. After loading the HTML, we select all 20 rows in .statsTableContainer and store a reference to the selection in statsTable. You can run the code with node and confirm that the length of statsTable is exactly 20.
The next step is to extract the rank, player name, nationality and number of goals from each row. We can achieve that using the following script:
// ...still inside the axios .then() callback:
const $ = cheerio.load(html);
const topPremierLeagueScorers = [];

statsTable.each(function () {
  // Pull the individual cells out of each row (the rank column's class name is assumed here).
  const rank = $(this).find('.rank > strong').text();
  const playerName = $(this).find('.playerName > strong').text();
  const nationality = $(this).find('.playerCountry').text();
  const goals = $(this).find('.mainStat').text();

  topPremierLeagueScorers.push({
    rank,
    name: playerName,
    nationality,
    goals,
  });
});

console.log(topPremierLeagueScorers);
Here, we are looping over the selection of rows and using the find() method to extract the data that we need, organize it and store it in an array. Now, we have an array of JavaScript objects that can be consumed anywhere else.
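For example, if you want to persist the result instead of just logging it, Node's built-in fs module can write the array out as a JSON file (the topScorers.json filename here is just an illustration):

const fs = require('fs');
fs.writeFileSync('topScorers.json', JSON.stringify(topPremierLeagueScorers, null, 2));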
Scrape a dynamic website using Puppeteer
Some websites rely exclusively on JavaScript to load their content, so using an HTTP request library like axios to request the HTML will not work because it will not wait for any JavaScript to execute like a browser would before returning a response.
This is where Puppeteer comes in. It is a library that allows you to control a headless browser from a script. A perfect use case for this library is scraping pages that require JavaScript execution.
Let’s examine how Puppeteer can help us scrape news headlines from r/news since the newer version of Reddit requires JavaScript to render content on the page.
It appears the headlines are wrapped in an anchor tag that links to the discussion on that headline. Although the class names have been obfuscated, we can select each headline by targeting each h2 inside any anchor tag that links to the discussion page.
Create a new file and add the following code into it:
const puppeteer = require('puppeteer');
const cheerio = require('cheerio');
const url = 'https://www.reddit.com/r/news';
puppeteer
  .launch()
  .then(browser => browser.newPage())
  .then(page => {
    return page.goto(url).then(function () {
      return page.content();
    });
  })
  .then(html => {
    const $ = cheerio.load(html);
    const newsHeadlines = [];
    $('a[href*="/r/news/comments"] > h2').each(function () {
      newsHeadlines.push({
        title: $(this).text(),
      });
    });
    console.log(newsHeadlines);
  })
  .catch(console.error);
This code launches a Puppeteer instance, navigates to the provided URL, and returns the HTML content after all the JavaScript on the page has been executed. We then use Cheerio as before to parse and extract the desired data from the HTML string.
Wrap up
In this tutorial, we learned how to set up web scraping in Node.js. We looked at scraping methods for both static and dynamic websites, so you should have no issues scraping data off of any website you desire.
You can find the complete source code used for this tutorial in this GitHub repository.
4 Tools for Web Scraping in Node.js – Twilio

Sometimes the data you need is available online, but not through a dedicated REST API. Luckily for JavaScript developers, there are a variety of tools available in Node.js for scraping and parsing data directly from websites to use in your projects and applications.
Let’s walk through 4 of these libraries to see how they work and how they compare to each other.
Make sure you have up-to-date versions of Node.js (at least 12.0.0) and npm installed on your machine, and initialize a new project in the directory where you want your code to live.
For some of these applications, we’ll be using the Got library for making HTTP requests, so install that with npm in the same directory.
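The exact commands aren’t shown above, but a minimal version of that setup, assuming a fresh directory, would be:

npm init --yes
npm install got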
Let’s try finding all of the links to unique MIDI files on this web page from the Video Game Music Archive with a bunch of Nintendo music as the example problem we want to solve for each of these libraries.
Tips and tricks for web scraping
Before moving onto specific tools, there are some common themes that are going to be useful no matter which method you decide to use.
Before writing code to parse the content you want, you typically will need to take a look at the HTML that’s rendered by the browser. Every web page is different, and sometimes getting the right data out of them requires a bit of creativity, pattern recognition, and experimentation.
There are helpful developer tools available to you in most modern browsers. If you right-click on the element you’re interested in, you can inspect the HTML behind that element to get more insight.
You will also frequently need to filter for specific content. This is often done using CSS selectors, which you will see throughout the code examples in this tutorial, to gather HTML elements that fit specific criteria. Regular expressions are also very useful in many web scraping situations. On top of that, if you need a little more granularity, you can write functions to filter through the content of elements, such as this one for determining whether a hyperlink tag refers to a MIDI file:
const isMidi = (link) => {
  // Return false if there is no href attribute.
  if (typeof link.href === 'undefined') { return false; }

  return link.href.includes('.mid');
};
It is also good to keep in mind that many websites prevent web scraping in their Terms of Service, so always remember to double check this beforehand. With that, let’s dive into the specifics!
jsdom
jsdom is a pure-JavaScript implementation of many web standards for Node.js, and is a great tool for testing and scraping web applications. Install it with npm in your terminal.
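The version pinned in the original isn’t shown, but the basic install command is:

npm install jsdom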
The following code is all you need to gather all of the links to MIDI files on the Video Game Music Archive page referenced earlier:
const got = require('got');
const jsdom = require('jsdom');
const { JSDOM } = jsdom;

// URL of the Video Game Music Archive page goes here.
const vgmUrl = '';

const noParens = (link) => {
  // Regular expression to determine if the text has parentheses.
  const parensRegex = /^((?!\().)*$/;
  return parensRegex.test(link.textContent);
};

(async () => {
  const response = await got(vgmUrl);
  const dom = new JSDOM(response.body);

  // Create an Array out of the HTML Elements for filtering using spread syntax.
  const nodeList = [...dom.window.document.querySelectorAll('a')];

  // isMidi is the helper function defined earlier.
  nodeList.filter(isMidi).filter(noParens).forEach(link => {
    console.log(link.href);
  });
})();
This uses a very simple query selector, a, to access all hyperlinks on the page, along with a few functions to filter through this content to make sure we’re only getting the MIDI files we want. The noParens() filter function uses a regular expression to leave out all of the MIDI files that contain parentheses, which means they are just alternate versions of the same song.
Save that code to a file and run it with the node command in your terminal.
If you want a more in-depth walkthrough on this library, check out this other tutorial I wrote on using jsdom.
Cheerio
Cheerio is a library that is similar to jsdom but was designed to be more lightweight, making it much faster. It implements a subset of core jQuery, providing an API that many JavaScript developers are familiar with.
Install it with the following command:
npm install cheerio@1.0.0-rc.3
The code we need to accomplish this same task is very similar:
const cheerio = require('cheerio');
const got = require('got');
const isMidi = (i, link) => {
  const href = link.attribs.href;
  return typeof href !== 'undefined' && href.includes('.mid');
};
const noParens = (i, link) => {
  return /^((?!\().)*$/.test(link.children[0].data);
};
(async () => {
  const response = await got(vgmUrl); // same vgmUrl as in the jsdom example
  const $ = cheerio.load(response.body);
  $('a').filter(isMidi).filter(noParens).each((i, link) => {
    console.log(link.attribs.href);
  });
})();
Here you can see that using functions to filter through content is built into Cheerio’s API, so we don’t need any extra code for converting the collection of elements to an array. Replace your existing code with this new code, and run it again. The execution should be noticeably quicker because Cheerio is a less bulky library.
If you want a more in-depth walkthrough, check out this other tutorial I wrote on using Cheerio.
Puppeteer
Puppeteer is much different than the previous two in that it is primarily a library for headless browser scripting. Puppeteer provides a high-level API to control Chrome or Chromium over the DevTools protocol. It’s much more versatile because you can write code to interact with and manipulate web applications rather than just reading static data.
npm install puppeteer@5.5.0
Web scraping with Puppeteer is much different than the previous two tools because rather than writing code to grab raw HTML from a URL and then feeding it to an object, you’re writing code that is going to run in the context of a browser processing the HTML of a given URL and building a real document object model out of it.
The following code snippet instructs Puppeteer’s browser to go to the URL we want and access all of the same hyperlink elements that we parsed for previously:
const puppeteer = require('puppeteer');

// vgmUrl is the same Video Game Music Archive URL as before.
const vgmUrl = '';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(vgmUrl);

  const links = await page.$$eval('a', elements => elements.filter(element => {
    return element.href.includes('.mid') && /^((?!\().)*$/.test(element.textContent);
  }).map(element => element.href));

  links.forEach(link => console.log(link));
  await browser.close();
})();
Notice that we are still writing some logic to filter through the links on the page, but instead of declaring more filter functions, we’re just doing it inline. There is some boilerplate code involved for telling the browser what to do, but we don’t have to use another Node module for making a request to the website we’re trying to scrape. Overall it’s a lot slower if you’re doing simple things like this, but Puppeteer is very useful if you are dealing with pages that aren’t static.
For a more thorough guide on how to use more of Puppeteer’s features to interact with dynamic web applications, I wrote another tutorial that goes deeper into working with Puppeteer.
Playwright
Playwright is another library for headless browser scripting, written by the same team that built Puppeteer. Its API and functionality are nearly identical to Puppeteer’s, but it was designed to be cross-browser and works with Firefox and WebKit as well as Chrome/Chromium.
npm install playwright@0.13.0
The code for doing this task using Playwright is largely the same, with the exception that we need to explicitly declare which browser we’re using:
const playwright = require('playwright');
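// A sketch of the rest of the script, assuming the same vgmUrl and MIDI-filtering
// logic as in the Puppeteer example above.
const vgmUrl = '';

(async () => {
  // The main difference from Puppeteer: choose a browser type explicitly.
  const browser = await playwright.chromium.launch(); // or playwright.firefox / playwright.webkit
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto(vgmUrl);

  const links = await page.$$eval('a', elements => elements.filter(element => {
    return element.href.includes('.mid') && /^((?!\().)*$/.test(element.textContent);
  }).map(element => element.href));

  links.forEach(link => console.log(link));
  await browser.close();
})();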
This code should do the same thing as the code in the Puppeteer section and should behave similarly. The advantage to using Playwright is that it is more versatile as it works with more than just one type of browser. Try running this code using the other browsers and seeing how it affects the behavior of your script.
Like the other libraries, I also wrote another tutorial that goes deeper into working with Playwright if you want a longer walkthrough.
The vast expanse of the World Wide Web
Now that you can programmatically grab things from web pages, you have access to a huge source of data for whatever your projects need. One thing to keep in mind is that changes to a web page’s HTML might break your code, so make sure to keep everything up to date if you’re building applications that rely on scraping.
I’m looking forward to seeing what you build. Feel free to reach out and share your experiences or ask any questions.
Email:
Twitter: @Sagnewshreds
Github: Sagnew
Twitch (streaming live code): Sagnewshreds

Frequently Asked Questions about web scraping in Node.js

Is node good for web scraping?

Luckily for JavaScript developers, there are a variety of tools available in Node.js for scraping and parsing data directly from websites to use in your projects and applications.

How do I scrape a website with node js?

Steps required for web scraping: create the package.json file; install and require the needed libraries; select the website and the data to scrape; set the URL and check the response code; inspect the page and find the proper HTML tags; include those tags in your code; and cross-check the scraped data.

What is web scraping in NodeJS?

Web scraping is the technique of extracting data from websites. … While extracting data from websites can be done manually, web scraping usually refers to an automated process. Web scraping is used by most bots and web crawlers for data extraction.
