How To Scrape Google Search Results

How to scrape 1,000 Google search result links in 5 minutes.

Graham Onak
Owner at GainTap
Published Jun 26, 2015
This is the best way to scrape Google search results quickly, easily and for free.
In this video I show you how to use a free Chrome extension called Linkclump to quickly copy Google search results to a Google sheet. This is the best way I know to copy links from Google.
Most crawlers don’t pull Google results. Here’s why.
Scraping Google is against their terms of service. They go so far as to block your IP if you automate scraping of their search results. I’ve tried great scraping tools with no luck. This is especially the case if you’re trying to pull search results from pages that Google hides as duplicates.
The best way to scrape Google is manually.
It may not be as fast as using a web crawler, but the fact is – it’s safe, easy and fast. I’ve used the above web scraping technique to pull 1,000 links in 5 minutes on the couch. Here’s the rundown on what you need to do.
1. Download Linkclump for Chrome
2. Adjust your Linkclump settings – set them to “Copy to Clipboard” on action
3. Open a spreadsheet
4. Search for a term
5. Right click and drag to copy all links in the selection
6. Copy and paste to a spreadsheet
7. Go to the next page of search results
8. Rinse and repeat
That’s it! Super easy and fast. If you don’t have the time, this makes for an excellent project to outsource to a virtual assistant.
How to scrape Google search results using Python - Practical ...


Although I suspect you are probably not technically allowed to do it, I doubt there’s an SEO in the land who hasn’t scraped Google search engine results to analyse them, or used an SEO tool that does the same thing. It’s much more convenient than picking through the SERPs to extract links by hand.
In this project, I’ll show you how you can build a relatively robust (but also slightly flawed) web scraper using
Requests-HTML that can return a list of URLs from a Google search, so you can analyse the URLs in your technical
SEO projects.
If you just want a quick, free way to scrape Google search results using Python, without paying for a SERP API
service, then give my EcommerceTools package a try. It lets you scrape Google search results in three lines of
code. Here’s how it’s done.
Load the packages
First, open up a Jupyter notebook and import the packages below. You’ll likely already have requests, urllib, and pandas; if you don’t have requests_html, install it by entering pip3 install requests_html.
import requests
import urllib
import pandas as pd
from requests_html import HTML
from requests_html import HTMLSession
Get the page source
Next, we’ll write a little function to pass our URL to Requests-HTML and return the source code of the page. This first creates a session, then fetches the response, or throws an exception if something goes wrong. We’ll scrape the interesting bits in the next step.
def get_source(url):
    """Return the source code for the provided URL.

    Args:
        url (string): URL of the page to scrape.

    Returns:
        response (object): HTTP response object from requests_html.
    """
    try:
        session = HTMLSession()
        response = session.get(url)
        return response
    except requests.exceptions.RequestException as e:
        print(e)
Scrape the results
This is the bit where things get interesting, and slightly hacky. I suspect Google does not like people scraping their search results, so you’ll find that there are no convenient CSS class names we can tap into. Those that are present seem to change, causing scrapers to break. To work around this I’ve used an alternate approach, which is more robust, but does have a limitation.
First, we’re using urllib.parse.quote_plus() to URL encode our search query. This will add + characters where spaces sit and ensure that the search term used doesn’t break the URL when we append it. After that, we’ll combine it with the Google search URL and get back the page source using get_source().
Rather than using the current CSS class or XPath to extract the links, I’ve just exported all the absolute URLs from the page using the absolute_links property of Requests-HTML. This is more resistant to changes in Google’s source code, but it means there will be Google URLs also present.
Since I’m only interested in non-Google content, I’ve removed any URLs with a Google-related URL prefix. The downside is that it will remove legitimate Google URLs in the SERPs.
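As a quick standalone illustration of what the encoding step does:

```python
import urllib.parse

# Spaces become + characters so the query can be safely
# appended to the Google search URL
encoded = urllib.parse.quote_plus("data science blogs")
print(encoded)  # data+science+blogs
```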
def scrape_google(query):

    query = urllib.parse.quote_plus(query)
    response = get_source("https://www.google.com/search?q=" + query)

    links = list(response.html.absolute_links)
    google_domains = ('https://www.google.',
                      'https://google.',
                      'https://webcache.googleusercontent.',
                      'http://webcache.googleusercontent.',
                      'https://policies.google.',
                      'https://support.google.',
                      'https://maps.google.')

    for url in links[:]:
        if url.startswith(google_domains):
            links.remove(url)

    return links
Running the function gives us a list of URLs that were found on the Google search results for our chosen term, with any Google-related URLs removed. This obviously isn’t a perfect match for the actual results; however, it does return the non-Google domains I’m interested in.
scrape_google("data science blogs")
You can tweak the code accordingly to extract only the links from certain parts of the SERPs, but you’ll find that you’ll need to update the code regularly, as Google changes its source code frequently. For what I needed, this did the job fine.
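The Google-domain filtering works because Python’s str.startswith() accepts a tuple of prefixes. A minimal standalone illustration (the prefixes here are just examples, not the full list):

```python
# Any URL matching one of these prefixes is treated as a Google-owned link
google_domains = ("https://www.google.",
                  "https://webcache.googleusercontent.")

links = [
    "https://www.google.com/maps",
    "https://example.com/blog",
]

# Keep only the URLs that don't start with a Google-related prefix
non_google = [url for url in links if not url.startswith(google_domains)]
print(non_google)  # ['https://example.com/blog']
```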
Want the text instead?
If you’re after the title, snippet, and the URL for each search engine result, try this approach instead. First, create a function to format and URL encode the query, send it to Google and show the output.
def get_results(query):

    query = urllib.parse.quote_plus(query)
    response = get_source("https://www.google.com/search?q=" + query)

    return response
Next, we’ll parse the response HTML. I’ve pored over the obfuscated HTML and extracted the current CSS values that hold the values for the result, the title, the link, and the snippet text. These change frequently, so this may not work in the future without adjusting these values.
def parse_results(response):

    # These CSS identifiers change frequently; the link and snippet
    # selectors below may need updating against the current SERP HTML
    css_identifier_result = ".tF2Cxc"
    css_identifier_title = "h3"
    css_identifier_link = ".yuRUbf a"
    css_identifier_text = ".VwiC3b"

    results = response.html.find(css_identifier_result)

    output = []
    for result in results:
        item = {
            'title': result.find(css_identifier_title, first=True).text,
            'link': result.find(css_identifier_link, first=True).attrs['href'],
            'text': result.find(css_identifier_text, first=True).text
        }
        output.append(item)

    return output
Finally, we’ll wrap up the functions in a google_search() function, which will put everything above together and return a neat list of dictionaries containing the results.
def google_search(query):
response = get_results(query)
return parse_results(response)
results = google_search("web scraping")
results
[{'title': 'What is Web Scraping and What is it Used For? | ParseHub',
  'link': '',
  'text': ''},
 {'title': 'Web scraping – Wikipedia',
  'text': 'Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may access the World\xa0… \n\u200eHistory · \u200eTechniques · \u200eSoftware · \u200eLegal issues'},
 {'title': 'Web Scraper – The #1 web scraping extension',
  'text': 'The most popular web scraping extension. Start scraping in minutes. Automate your tasks with our Cloud Scraper. No software to download, no coding needed. \n\u200eWeb Scraper · \u200eCloud · \u200eTest Sites · \u200eDocumentation'},
 {'title': 'Web Scraper – Free Web Scraping',
  'text': '23 Sept 2020 — With a simple point-and-click interface, the ability to extract thousands of records from a website takes only a few minutes of scraper setup. Web\xa0… '},
 {'title': 'Python Web Scraping Tutorials – Real Python',
  'text': 'Web scraping is about downloading structured data from the web, selecting some of that data, and passing along what you selected to another process. '},
 {'title': 'ParseHub | Free web scraping – The most powerful web scraper',
  'text': 'ParseHub is a free web scraping tool. Turn any site into a spreadsheet or API. As easy as clicking on the data you want to extract. '},
 {'title': 'Web Scraping Explained – WebHarvy',
  'text': 'Web Scraping (also termed Screen Scraping, Web Data Extraction, Web Harvesting etc.) is a technique employed to extract large amounts of data from websites\xa0… '},
 {'title': 'What Is Web Scraping And How Does Web Crawling Work?',
  'text': 'Web scraping, also called web data extraction, is the process of extracting or scraping data from websites. Learn about web crawling and how it works. '},
 {'title': "A beginner's guide to web scraping with Python | Opensource…",
  'text': "22 May 2020 — Setting a goal for our web scraping project. Now we have our dependencies installed, but what does it take to scrape a webpage? Let's take a\xa0… "}]
If you want to quickly scrape several pages of Google search results, rather than just the first page of results,
check out EcommerceTools instead, or adapt the code above to support pagination.
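One common way to adapt the code for pagination is to vary Google’s start parameter, which offsets results in steps of 10. Here’s a sketch under that assumption; build_search_urls is a hypothetical helper, and each URL it returns would then be passed to get_source() and filtered as above:

```python
import urllib.parse

def build_search_urls(query, pages=3, results_per_page=10):
    """Return one Google search URL per results page for the query."""
    encoded = urllib.parse.quote_plus(query)
    return [
        f"https://www.google.com/search?q={encoded}&start={page * results_per_page}"
        for page in range(pages)
    ]

# Each URL could then be fetched with get_source() in turn
urls = build_search_urls("web scraping", pages=2)
print(urls)
```

Bear in mind that fetching many pages in quick succession makes it more likely Google will block the requests.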
Matt Clarke, Saturday, March 13, 2021
How to scrape Google SERPs - Apify Blog

Does Google even need an introduction? If our phones have become extensions of our hands, then Google is one of the main reasons for that evolution. These days, Google is a synonym for answers, speed, and accessibility – but also simply for billions. Billions of dollars, users, accounts, visitors, devices, clicks, minutes, gigabytes, searches – you name it – and of course, data measured in billions of terabytes. And that data can be extracted automatically and effortlessly, if you know the right methods and have the right tools. In this brief how-to article, we’re going to show you exactly how to scrape the biggest library in the world by using a ready-made tool on the Apify platform called Google Search Results Scraper. This is your step-by-step guide to how to scrape any information available from Google, including organic and paid results, ads, queries, People Also Ask boxes, prices, and reviews. Let’s get started!

What are Google SERPs?
A Google SERP is the page containing the list of search results that Google displays to you when you type in your query and hit Enter. SERP stands for Search Engine Results Page, and you’ll find SERPs not only on Google, which controls 90% of the search engine market, but also on other search engines, such as Bing, Yahoo, Yandex, and others. We need to know this term in order to know how to use web scraping on the Google Search Engine. You can consider the terms Google page, Google search page, and Google SERP to be equal and interchangeable, but we’ll stick with Google SERP in order to remain technically precise.
SERPs have changed a lot over the years, with the most prominent features being those infoboxes we all know too well – Knowledge Graphs, as well as the Carousel – something so ubiquitous these days that we can’t imagine the Google SERP interface looking any other way.
Both of those now-classic Google SERP features were part of the Hummingbird algorithm release back in 2013. That’s a far cry from the 2003 version of Google results. Does this prehistoric SERP interface ring a bell? Luckily, we’re not there anymore.

Google SERP structure – how to scrape Google
In order to work out how to scrape Google, we first need to understand how it sees and prioritizes our searches. What you see when you search for things on Google is not just an index of pages with URLs, or so-called organic search. While it used to be like that in the past, as we’ve seen, the main purpose and driving force of Google – or any search engine for that matter – has always been to have your queries answered as quickly and efficiently as possible, and in a way that will attract your attention and be easy on the eyes. That’s why over time the search results have become much more multilayered, including results of different complexity and formats, sort of like a huge cake. And that cake-like structure isn’t going away any time soon, with voice command search, apps, and mobile search introducing their significant corrections into the way we google stuff. Today, Google Search results consist of various levels, depending on the complexity and type of search, as you can see in this example of a string theory query. As you can see, the Google search page is now packed with various content: featured snippets, so-called snap packs, ads, and organic results. Additional types may also show up: product ads, related searches, and multiple snap pack types (Wikipedia, Maps, videos, etc.).

Google SERP API
Now why would you need an API to extract data from Google? Technically, you can fish out some insights into the way Google works and displays results without the need to use any specific SEO tools: just google your keyword and see what you get.
But there are two problems with this approach: first, the process is pretty time-consuming to do manually and at scale – an inefficient monkey job, essentially. Second, the results you get can’t be considered objective. At the beginning of the 2000s, when Google SERPs were first introduced, they looked much the same to each user of the localized Google version in each country. Now Google algorithms give out customized results tailored to each user, taking into account many factors, such as:

  • Type of device: if a user is searching on their smartphone, the search results will look different, since starting from 2015 Google prefers showing web pages that are mobile-friendly.
  • Registration: if a Google user is logged into their account, what they see on SERPs will be aligned with their history and user behavior, provided that’s allowed within their data-related settings.
  • Browser history: if a user rarely empties their browser cache, Google will combine information about previous search queries with cookies, and adjust the results.
  • Location: if the geolocalization option is activated, Google aligns the SERPs with the user’s location. That’s why search results for a sushi takeaway query in Prague will be different from those in Los Angeles. If we’re talking about local search, the results will be a combination of data from Google Search and Google Maps.

The solution to both manual work and this lack of objectivity is an automated crawler that is simple enough to use, but also complex enough to scrape such a massive website as Google. In other words, a SERP API – that’s a lot of caps, but essentially it’s a program that will automatically collect data from Google SERPs for you to analyze and use. This is exactly what our Google Search Results Scraper is created for.
Our SERP API supports the extraction of all data on:

  • organic and paid results
  • ads
  • queries
  • People Also Ask
  • prices
  • reviews

If you need additional attributes, you can also include a short snippet of JavaScript code to extract additional attributes from the HTML. Or perhaps you need something extra? You can also submit a request for a custom Google scraping solution.

Why scrape Google?
Google is the main entry point to the internet for billions of people. This makes appearing in Google Search results a key factor for almost every business. And Google reviews and ratings have a massive impact on local businesses’ online profiles. Marketing agencies, especially those with a large number of clients from various industries, rely heavily on obtaining reliable SEO tools. They are not only a means of effectively performing various tasks, but also a means of successful management and analysis of results. You can look for things like how the top-ranking pages are writing their page titles, the keywords they’re targeting, how they format their content, or take it a stage further and do some deeper link analysis. Typical use cases for Google Search scraping are, among thousands of others:

  • Search engine optimization (SEO) – monitor how your website performs in Google for certain queries over a period of time
  • Analyze ads for a given set of keywords
  • Monitor your competition in both organic and paid results
  • Build a URL list for certain keywords – useful if you, for example, need good relevant starting points when scraping web pages containing specific phrases

And if you’re out of ideas of what to do with all that extracted data, visit our Industry pages for inspiration, with clear examples of how to use the results of web scraping for business and research.

What about the official Google Search API?
That’s a funny question. Google doesn’t provide its own SERP API for web search – so Google doesn’t make it that easy to extract data from Google at scale.
Moreover, only a limited subset of the information available on any search results page can be provided to you via Google services such as Google Ads or Google Analytics. The two official methods suggested by Google for getting data were the Google Custom Search API (deprecated in April 2018) and scraping by the URLFetch method. Now that we’ve covered all the aspects and reasons for scraping Google, let’s get started with the tutorial itself. Promise it won’t take long :)

Step-by-step guide to scraping Google SERPs
1. Go to Apify’s website.
2. Sign in at the top-right corner using your email account, Google, or GitHub.
3. When you log in, you’ll be redirected to your Apify Console. Find the Google Search Scraper card and click on it.
4. Now you’re on the page for the Google Search Results Scraper. Scroll down to get familiar with its parameters and possibilities.
5. Come back to the top of the page and click the blue Create new task button. It will redirect you to the input parameters of your scraper. Note that your scraper can be found in the Actors tab on the left, because that’s what we call our customized scrapers. You can read more about those terms here.
6. You’ll find yourself on a new page with plenty of options for your first scraping session. Once you’ve figured out all the input parameters, such as Country, Language, Results per page, etc., just click the green Run button.
7. Our actor will start running its task and will change its status to Running.
8. When the scraper finishes its run, you will see its status change to Succeeded, as well as how many results your scraping has brought you.
9. To preview your results, head over to the Dataset tab. This tab contains your data in lots of versatile formats, including HTML table, JSON, CSV, Excel, XML, and RSS feed. You can see them by clicking on View in another tab or Preview. Let’s preview our results in JSON format.
10. To download your dataset results, pick the format and click Download.
You can then share that data or upload it anywhere you like. Use it in spreadsheets, other programs or apps, or your own projects. Congrats, you’ve just completed your first scraping session :)

Google SERP proxies
Apify has proxies designed specifically for SERPs. Our proxies will make your scraping much faster, and you’ll be able to dynamically switch between countries so that you can get search information from any location. If you sign up for a free Apify account, you get a 30-day free trial of our SERP proxy service. Now that you’re all ready, go ahead and start your first month with Apify by using our free Google Search Scraper on the Apify Store. Don’t forget to send us a tweet if you do something interesting with all that data :)
If you need to scrape other parts of the Google giant, we have two other amazing scrapers: our Google Trends Scraper and Google Trending Searches Scraper are ready to help you keep track of emerging trends and ideas. Scraping Google Trends can be useful for research, business, and personal interest and entertainment. If you need some inspiration on how to use these and our other SEO tools, check out 5 powerful scrapers to add to your SEO tool kit.
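If you’d rather trigger a scraping run programmatically than through the Console, Apify actors can also be started via the platform’s REST API. The sketch below only constructs the request URL and input payload rather than sending anything; the actor identifier and input field names shown are assumptions, so check the actor’s input schema before relying on them:

```python
import json

APIFY_TOKEN = "YOUR_API_TOKEN"  # placeholder: your personal API token
ACTOR_ID = "apify~google-search-scraper"  # assumed actor identifier

# Endpoint for starting an actor run on the Apify platform
run_url = f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs?token={APIFY_TOKEN}"

# Assumed input fields for the Google Search Results Scraper
run_input = {
    "queries": "web scraping",
    "countryCode": "us",
    "resultsPerPage": 10,
}
payload = json.dumps(run_input)

print(run_url)
print(payload)
```

Sending this payload as the POST body of run_url (with a real token) would start the run; the results would then appear in the run’s dataset, just as in the Console walkthrough above.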
