Anti Scraping Technology

Anti-scraping techniques and how to bypass them – Apify …

Websites use anti-scraping techniques to block web scraping bots. We explore these prevention methods and the techniques used to bypass them, such as rotating IP addresses through proxies, emulating browser signatures, and emulating shared IP addresses. Our research shows that a number of such methods are deployed in the field to get around these defenses.
In many cases, we found that very simple changes in approach are commonly used. For example, if a site is blocking based on IP address, switching between different addresses is effective. If a website is analyzing behavior, making that behavior as human-like as possible will confuse the anti-scraping system. If these simpler options fail, there are more complex methods available, such as shared IP address emulation (also known as session multiplexing).
IP address-based blocking
A popular option some websites use is blocking access based on the IP range your address belongs to. This kind of protection aims to reduce the amount of non-human traffic. For instance, websites will deny access to ranges of Amazon Web Services' IP addresses and other commonly known ranges.
Bypassing IP address-based blocking
We found that web scrapers can work around IP address-based blocking by rotating the IP addresses from which they send requests to target websites. This is done with a pool of proxy servers: each request is assigned a different proxy server from the pool, making it look like a request coming from another user. The proxies can be selected either randomly or in round-robin fashion.
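As a concrete illustration, here is a minimal Python sketch of per-request proxy rotation using the requests library; the proxy URLs and target URL are placeholders to be replaced with your own.

```python
import random
import requests

# Placeholder proxy pool -- replace with real proxy URLs from your provider.
PROXIES = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]

def fetch(url):
    # Pick a proxy at random (round-robin via itertools.cycle works equally well).
    proxy = random.choice(PROXIES)
    return requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        timeout=30,
    )

response = fetch("https://example.com/products?page=1")
print(response.status_code)
```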
This method’s effectiveness depends on various factors, such as the number of web pages that are being scraped, the sophistication of the scraping protection and the number and type of proxies. If too many requests are sent from a single proxy in too short a period of time, the proxy might get “burned”, which means all further requests from it are blocked.
Our research determined that, for successful large-scale scraping activities, it is essential to have a sufficient pool of proxies and to time the workload to maximize scraping throughput without burning the proxies.
Apify Proxy enables you to enhance your data throughput and access websites from any geographical location by using an extensive pool of datacenter and residential proxies.
IP rate limiting
When crawling a website, a web scraping bot will typically send many more requests from a single IP address than a human user could generate over the same period. Websites can easily monitor how many requests they receive from a single IP address. If the number of requests exceeds a certain limit, websites can block that IP address or require a CAPTCHA test.
Bypassing IP rate limiting
We found two ways being used to work around rate limiting. One method is to limit how many pages on a single site are scraped concurrently, possibly even introducing delays intentionally after the site's limit has been reached. The other method is to use proxy servers and rotate IP addresses after a certain number of requests.
Apify actors are designed to reduce the workload on websites being scraped. To lower the concurrency when using the Apify SDK, just pass the maxConcurrency option to your crawler’s setup. If you use actors from Apify Store, you can usually set the maximum concurrency in the actor’s input.
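The maxConcurrency option belongs to the JavaScript Apify SDK; purely as a language-neutral illustration of the same idea, here is a minimal Python sketch that caps how many pages are fetched in parallel. The URLs and worker count are placeholder values.

```python
from concurrent.futures import ThreadPoolExecutor
import requests

URLS = [f"https://example.com/page/{i}" for i in range(1, 51)]  # placeholder URLs
MAX_CONCURRENCY = 5  # analogous in spirit to the SDK's maxConcurrency option

def fetch(url):
    return requests.get(url, timeout=30).status_code

# The thread pool never runs more than MAX_CONCURRENCY requests at once.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
    for url, status in zip(URLS, pool.map(fetch, URLS)):
        print(url, status)
```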
HTTP request analysis
Each HTTP request sent from a client to a web server contains a lot of hidden information, such as HTTP headers, the client IP address, the SSL/TLS version, or the list of supported TLS ciphers. Even the structure of the HTTP request itself, e.g. the order of the HTTP headers, can tell whether the request comes from a real web browser or a script.
Websites can check for these signals and either block requests that do not have the signature of a known web browser or show a CAPTCHA. Our research shows that this kind of protection can often be bypassed with plain HTTP requests alone, because it does not collect any window attributes or evaluate any JavaScript code; only the request signature has to look right.
Bypassing HTTP request analysis
A straightforward method frequently used to circumvent HTTP request analysis is to use a real web browser, such as headless Chrome, which produces genuine browser HTTP signatures. However, this is inefficient, as web browsers consume a lot of system resources and are generally slow.
A more efficient method is to emulate browser HTTP request signatures while still using a low-level HTTP request library. This makes the scripted HTTP request look like it came from a real web browser, but is much faster and more efficient. Note that we found this method only works where the page content is served directly in the first HTML response and not loaded later using AJAX.
To test this, we used Apify SDK, which provides a requestAsBrowser() function that emulates the Firefox browser’s HTTP headers.
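requestAsBrowser() is part of the JavaScript Apify SDK; as a rough Python illustration of the same idea, the sketch below attaches a Firefox-style header set to a plain HTTP request. The header values are examples rather than an exact copy of any specific Firefox build, and header order and TLS fingerprints are not reproduced by a plain HTTP library.

```python
import requests

# Illustrative Firefox-style headers; a real browser also controls header order
# and its TLS fingerprint, which a plain HTTP library cannot fully reproduce.
FIREFOX_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1",
}

response = requests.get("https://example.com", headers=FIREFOX_HEADERS, timeout=30)
print(response.status_code)
```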
User behavior analysis
Rather than analyzing and reacting to client requests in real time, websites can collect user behavior data over longer periods and then react to it only when sufficient information is available.
Such data can contain the order in which pages are visited, how long the user stays on each page, mouse movements or even how fast forms are filled in. If enough evidence indicates that the user’s behavior is not human, websites can block the client IP address or serve a CAPTCHA.
Browser fingerprinting
Websites can also use various techniques to test whether a client's web browser is being used by a human user or a robot, and even identify repeated visits by the same web browser. This is known as browser fingerprinting and it can range from very primitive JavaScript challenges to state-of-the-art browser integrity tests and behavioral analyses.
The tests look at information such as your browser type and version, operating system, installed browser extensions, available fonts, and timezone. Combined, all this information forms the browser's "fingerprint".
While this information may seem quite generic, Panopticlick found that on average only 1 in 286,777 browsers will have the same fingerprint as you.
Combinations of the above techniques
To make things more complicated, websites often combine several scraping protections, such as IP address-based blocking and HTTP request analysis.
IP address rotation and emulation of browser HTTP signatures can be effective for many web scraping tasks, but large-scale crawls will get blocked. Using more proxies is a solution to this, but that can be expensive.
Shared IP address emulation can dramatically extend the effectiveness of scraping and multiply the number of pages that can be fetched. The technique relies on websites knowing that many different users can be behind a single IP address.
For example, requests from mobile devices are usually only routed through a handful of IP addresses, while users behind a single corporate firewall might all share one IP address. By emulating and managing these user sessions per IP address, we found that it is possible to keep websites from blocking aggressively.
To make it work, a single user session has to always be routed via the same IP address. A website can identify such user sessions based on cookies, authentication tokens or a browser HTTP signature/fingerprint.
Our research into this was assisted by the Apify SDK's SessionPool class. It can be combined with other Apify tools, such as actors or Apify Proxy, but it also works outside the Apify ecosystem.
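SessionPool itself is a JavaScript Apify SDK class; purely as an illustration of the concept, here is a minimal Python sketch that pins each "user session" (a cookie-carrying requests.Session) to one proxy and retires it if it appears to be blocked. The proxy URLs, pool sizing, and retirement rule are all assumptions for the sketch.

```python
import random
import requests

class SimpleSessionPool:
    """Each 'session' is a cookie-carrying requests.Session pinned to a single proxy."""

    def __init__(self, proxies, sessions_per_proxy=3):
        self.pool = []
        for proxy in proxies:
            for _ in range(sessions_per_proxy):
                s = requests.Session()
                s.proxies = {"http": proxy, "https": proxy}  # session always uses the same IP
                self.pool.append(s)

    def get(self, url):
        session = random.choice(self.pool)
        resp = session.get(url, timeout=30)
        if resp.status_code in (403, 429):
            # Session looks burned: retire it along with its cookies and proxy binding.
            self.pool.remove(session)
        return resp

pool = SimpleSessionPool([
    "http://proxy1.example.com:8000",  # placeholder proxies
    "http://proxy2.example.com:8000",
])
print(pool.get("https://example.com").status_code)
```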
Comparing ways of bypassing scraping protection
In a recent experiment, we found session emulation to be at least twice as effective as plain IP address rotation.
5 Anti-Scraping Techniques You May Encounter | Octoparse

With the advent of big data, people have started to obtain data from the Internet for data analysis with the help of web crawlers. There are various ways to build your own crawler: browser extensions, Python coding with Beautiful Soup or Scrapy, or data extraction tools like Octoparse.
However, there is an ongoing arms race between spiders and anti-bot systems. Web developers apply different kinds of anti-scraping techniques to keep their websites from being scraped. In this article, I have listed the five most common anti-scraping techniques and how they can be avoided.
5 Anti-scraping Techniques
IP
Captcha
Log-in
UA (User-Agent)
AJAX
1. IP
One of the easiest ways for a website to detect web scraping activity is IP tracking. The website can identify whether an IP belongs to a robot based on its behavior. When a website finds that an overwhelming number of requests have been sent from a single IP address periodically or within a short period of time, that IP is likely to be blocked on suspicion of being a bot. So what really matters when building a crawler that can evade anti-scraping measures is the number and frequency of visits per unit of time. Here are some scenarios you may encounter.
Scenario 1: Making multiple visits within seconds. There's no way a real human can browse that fast. So, if your crawler sends frequent requests to a website, the website will very likely block the IP, identifying it as a robot.
Solution: Slow down the scraping speed. Setting up a delay (e.g. with a "sleep" function) before each request, or increasing the waiting time between steps, generally works.
Scenario 2: Visiting a website at the exact same pace. A real human does not repeat the same behavioral patterns over and over again. Some websites monitor request frequency, and if requests are sent periodically with the exact same pattern, like once per second, the anti-scraping mechanism is very likely to be activated.
Solution: Set a random delay for every step of your crawler. With a randomized scraping speed, the crawler behaves more like a human browsing a website.
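A minimal Python sketch of both solutions above: a pause between requests, randomized so the crawl has no fixed rhythm. The URLs and delay bounds are placeholder values.

```python
import random
import time
import requests

urls = [f"https://example.com/item/{i}" for i in range(1, 6)]  # placeholder URLs

for url in urls:
    response = requests.get(url, timeout=30)
    print(url, response.status_code)
    # Scenario 1 fix: wait between requests. Scenario 2 fix: make the wait random
    # so the crawl does not tick along at a perfectly regular pace.
    time.sleep(random.uniform(2.0, 6.0))
```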
Scenario 3: Some advanced anti-scraping techniques incorporate complex algorithms that track requests from different IPs and analyze their averages. If an IP's requests are unusual, such as sending the same number of requests or visiting the website at the same time every day, it will be blocked.
Solution: Change your IP periodically. Most VPN services, cloud servers, and proxy services can provide rotating IPs. When requests are sent through these rotating IPs, the crawler behaves less like a bot, which decreases the risk of being blocked.
2. Captcha
Have you ever seen this kind of image when browsing a website? Such challenges typically ask you to click a checkbox, select specific pictures, or type in the right string of characters.
These images are called Captchas. Captcha stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is a public automatic program that determines whether a user is a human or a robot. The program presents various challenges, such as degraded images, fill-in-the-blanks, or even equations, which are supposed to be solvable only by a human.
This test has been evolving for a long time, and many websites now apply Captcha as an anti-scraping technique. It was once very hard to pass a Captcha directly, but nowadays many open-source tools can be applied to solve Captcha problems, though they may require more advanced programming skills. Some people even build their own feature libraries and create image recognition models with machine learning or deep learning to pass this check.
It is easier not to trigger it than to solve it
For most people, the easiest way is to slow down or randomize the extraction process so as not to trigger the Captcha test in the first place. Adjusting the delay time or using rotating IPs can effectively reduce the probability of triggering the test.
3. Log in
Many websites, especially social media platforms like Twitter and Facebook, only show you information after you log in. In order to crawl sites like these, the crawler needs to simulate the login steps as well.
After logging into the website, the crawler needs to save the cookies. A cookie is a small piece of data that stores a user's browsing information. Without the cookies, the website would forget that you have already logged in and would ask you to log in again.
Moreover, some websites with strict scraping mechanisms may only allow partial access to the data, such as 1,000 lines of data per day, even after logging in.
Your bot needs to know how to log in
1) Simulate keyboard and mouse operations. The crawler should simulate the login process, which includes steps like clicking the text box and the "log in" button with the mouse, or typing in the account and password with the keyboard.
2) Log in first and then save the cookies. Websites that allow cookies remember users by storing their cookies. With these cookies, there is no need to log in again in the short term. Thanks to this mechanism, your crawler can avoid the tedious login steps and scrape the information you need (a minimal sketch follows after this list).
3) If you unfortunately encounter the strict scraping mechanisms mentioned above, you can schedule your crawler to monitor the website at a fixed frequency, such as once a day, scraping the newest 1,000 lines of data each time and accumulating it over the periods.
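Here is a minimal sketch of steps 1) and 2) for a site with a plain form-based login, using requests.Session to hold the cookies and pickling them for later runs. The login URL and form field names are hypothetical and differ per site; sites with JavaScript-driven logins need browser automation instead.

```python
import pickle
import requests

LOGIN_URL = "https://example.com/login"                 # hypothetical login endpoint
CREDENTIALS = {"username": "me", "password": "secret"}  # field names vary per site

session = requests.Session()
session.post(LOGIN_URL, data=CREDENTIALS, timeout=30)   # server sets auth cookies on the session

# Save the cookies so later runs can skip the login step while they remain valid.
with open("cookies.pkl", "wb") as f:
    pickle.dump(session.cookies, f)

# Later, in another run: restore the cookies and scrape logged-in pages directly.
with open("cookies.pkl", "rb") as f:
    session.cookies.update(pickle.load(f))

print(session.get("https://example.com/account/data", timeout=30).status_code)
```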
4. UA
UA stands for User-Agent, a header that tells the website how the user is visiting. It contains information such as the operating system and version, CPU type, browser and version, browser language, browser plug-ins, etc.
An example UA: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11
When scraping a website, if your crawler sends no headers, it identifies itself only as a script (e.g. a crawler built with Python announces itself as a Python script). Websites will almost certainly block requests from a bare script. In this case, the crawler has to pretend to be a browser by sending a UA header so that the website grants it access.
Sometimes a website shows different pages or information to different browsers or browser versions, even when you enter the site with the same URL. The information may render correctly in one browser while other browsers are blocked. Therefore, to make sure you reach the right page, multiple browsers and versions may be required.
Switch between different UAs to avoid getting blocked
Change the UA information until you find one that works. Some sensitive websites that apply complex anti-scraping techniques may even block access if the same UA is used for too long. In this case, you need to change the UA information periodically.
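A minimal sketch of rotating the UA with Python requests; the user-agent strings below are examples only and should be kept up to date with current browser releases.

```python
import random
import requests

# Example user-agent strings -- refresh these as new browser versions ship.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0",
]

def fetch(url):
    # Each request goes out with a different, randomly chosen User-Agent.
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=30)

response = fetch("https://example.com")
print(response.request.headers["User-Agent"], response.status_code)
```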
5. AJAX
Nowadays, more websites are developed with AJAX instead of traditional web development techniques. AJAX stands for Asynchronous JavaScript and XML, a technique for updating a website asynchronously. Briefly speaking, the whole page doesn't need to reload when only small changes take place inside it.
So how could you know whether a website applies AJAX?
A website without AJAX: The whole page refreshes even if you make only a small change on it. Usually, a loading sign appears and the URL changes. For these websites, you can take advantage of this and try to find the pattern in how the URLs change. Then you can generate URLs in batches and extract information from them directly, instead of teaching your crawler to navigate the website like a human.
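For instance, if a hypothetical site paginates with a simple query parameter, the URLs can be generated up front rather than discovered by clicking through pages:

```python
# Hypothetical pagination pattern: https://example.com/products?page=1, ?page=2, ...
urls = [f"https://example.com/products?page={page}" for page in range(1, 101)]
print(urls[:3])  # feed these straight to your crawler instead of simulating clicks
```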
A website with AJAX: Only the place you click changes and no loading sign appears. Usually, the URL does not change either, so the crawler cannot rely on URL patterns and has to handle the page differently.
For some complex websites built with AJAX, special techniques are needed to work out how each site encodes or encrypts its data and then extract it. Solving this problem can be time-consuming because the encoding varies across pages. If you can use a browser with built-in JavaScript execution, it can render the page automatically and let you extract the data.
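When the data is loaded via AJAX, it is often easier to locate the underlying request in the browser's developer tools (Network tab) and call that endpoint directly. The endpoint URL, parameters, and response shape below are hypothetical.

```python
import requests

# Hypothetical JSON endpoint discovered in the browser's Network tab.
API_URL = "https://example.com/api/products"

response = requests.get(API_URL, params={"page": 1, "per_page": 50}, timeout=30)
data = response.json()  # AJAX endpoints usually return JSON rather than HTML
for item in data.get("results", []):
    print(item.get("name"), item.get("price"))
```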
Web scraping and anti-scraping techniques make progress every day. Perhaps some of these techniques will already be outdated by the time you read this article. However, you can always get help from us at Octoparse. Our mission is to make data accessible to anyone, in particular those without technical backgrounds. As a web-scraping tool, we can provide ready-to-deploy solutions for all five of these anti-scraping techniques. Feel free to contact us when you need a powerful web-scraping tool for your business or project!
Author: Jiahao Wu
Cite: Megan Mary Jane. 2019. How to bypass anti-scraping techniques in web scraping.
How to Bypass Anti-Scraping Tools on Websites - Datahut Blog

In this era of tremendous competition, enterprises use every method within their power to get ahead. For businesses, a uniquely useful tool for achieving that edge is web scraping. But this too isn't a field without hurdles: websites employ various anti-scraping techniques to block you from scraping them. Still, there is always a way around.
What do we know about web scraping? The WWW harbors more websites than you can imagine, and some of them are in the same domain as yours. For example, both Amazon and Flipkart are e-commerce websites. Such websites become your rivals without even trying. So when it comes to tasting success, you need to identify your competitors and outdo them. What can help you get that edge over a million others working in the same domain? The answer is web scraping.
Web scraping is simply collecting data from various websites. You can extract information such as product pricing and discounts, and the data you acquire can help in enhancing the user experience. This, in return, helps ensure that customers prefer you over your competitors.
For instance, suppose your e-commerce company sells software and you need to understand how to improve your product. You will have to visit websites that sell software and find out about their products. Once you do this, you can also check your competitors' prices. Eventually, you can decide at what price to place your software and which features need to be improved. This process applies to almost any product.
What are Anti-Scraping Tools and How to Deal With Them?
As a growing business, you will have to target popular and well-established websites, and that is where web scraping becomes difficult. Why? Because these websites employ various anti-scraping techniques to block your way. What do these anti-scraping tools do? Websites harbor a lot of information. Genuine visitors might use this information to learn something or to select a product they want to buy. But not-so-genuine visitors, like competitor websites, can use the same information to gain a competitive advantage. This is why websites use anti-scraping tools to keep their competitors at bay. Anti-scraping tools can identify non-genuine visitors and prevent them from acquiring data for their own use. These techniques can be as simple as IP address detection and as complex as JavaScript verification. Let us look at a few ways of bypassing even the strictest of these anti-scraping tools.
1. Keep Rotating your IP Address
This is the easiest way to deceive any anti-scraping tool. An IP address is like a numerical identifier assigned to a device, and it is easy for a website to monitor when you visit it to perform web scraping. Most websites keep track of the IP addresses their visitors use. So, while doing the enormous task of scraping a large site, you should keep several IP addresses handy; you can think of this as wearing a different face mask each time you go out of your house. By rotating through a number of them, none of your IP addresses will get blocked. This method works with most websites, but a few high-profile sites use advanced proxy blacklists, and that is where you need to act smarter: residential or mobile proxies are reliable alternatives there. In case you are wondering, there are several kinds of proxies. The world has a fixed number of IP addresses, yet if you manage to have a hundred of them, you can visit a hundred sites without arousing any suspicion. So the most crucial step is to find the right proxy service provider.
2. Use a Real User Agent
User agents are a type of HTTP header whose primary function is to tell the website which browser you are using to visit it. Websites can easily block you if you are not using one of the major browsers, such as Chrome or Mozilla Firefox. Most scrapers ignore this point. You can cut down your chances of getting blacklisted by setting a user agent that looks genuine and well-known; you can easily pick one from published lists of user agents. For advanced cases, the Googlebot user agent can help, since most sites let Googlebot through so that their pages get listed on Google. A user agent works best when it is up to date: every browser release uses a different user-agent string, and if you fail to stay current you will arouse the very suspicion you are trying to avoid. Rotating between a few user agents can give you an upper hand too.
3. Keep Random Intervals Between Each Request
A web scraper is like a robot: scraping tools tend to send requests at regular intervals. Your goal should be to appear as human as possible, and since humans don't follow rigid routines, it is better to space out your requests at random intervals. This way, you can dodge many anti-scraping checks on the target website. Make sure your requests are also polite: if you send requests too frequently, you can crash the website for everyone, and the goal is never to overload the site. Scrapy, for example, encourages throttling requests rather than firing them as fast as possible. As an additional safeguard, you can refer to a website's robots.txt file, which may contain a line specifying crawl-delay. That tells you how many seconds to wait between requests to avoid generating high server traffic.
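Python's standard library can read the crawl-delay value directly; a minimal sketch, assuming the target site actually publishes a Crawl-delay line in its robots.txt:

```python
import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()

# crawl_delay() returns the Crawl-delay for the given user agent, or None if unset.
delay = rp.crawl_delay("*") or 5               # fall back to a polite default
print(f"Waiting {delay}s between requests")
time.sleep(delay)
```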
4. A Referer Always Helps
The referer is an HTTP request header that specifies which site you were redirected from, and it can be a lifesaver during any web scraping operation. Your goal should be to appear as if you are coming directly from Google, and a header such as "Referer": "https://www.google.com/" helps you do this. You can even change it as you change countries; in the UK, for example, you can use "https://www.google.co.uk/". Many sites are affiliated with certain referrers that redirect traffic to them, and you can use a tool like SimilarWeb to find the common referrers for a website; these are usually social media sites like YouTube or Facebook. Knowing the referrer makes you appear more authentic: the target site will think its usual referrer sent you there, classify you as a genuine visitor, and won't think of blocking you.
5. Avoid any Honeypot Traps
As robots got smarter, so did website operators. Many websites place invisible links that only scraping robots would follow. By intercepting these robots, websites can easily block your web scraping operation. To safeguard yourself, look for "display: none" or "visibility: hidden" CSS properties on a link; if you detect them, it is time to backtrack (a minimal sketch of this check follows below). Using this method, websites can identify and trap any programmed scraper, fingerprint its requests, and then block them permanently; it is a technique that masters of web security use against web crawlers. Check every page for such properties. Webmasters also use tricks like making the link the same color as the background, so for extra safety also look for properties like "color: #fff" or "color: #ffffff". This way, you can even save yourself from links that have been rendered invisible.
6. Prefer Using Headless Browsers
These days, websites use all sorts of trickery to verify that a visitor is genuine, for instance browser cookies, JavaScript, extensions, and fonts. Scraping such websites can be a tedious job, and a headless browser can be your lifesaver. Many tools are available that can help you drive a browser identical to the one used by a real user, which helps you avoid detection almost entirely. The main cost is that this approach takes more caution and time to set up, but in return it is one of the most effective ways to go undetected while scraping a website. The drawback of such smart tools is that they are memory- and CPU-intensive, so resort to them only when you can find no other way to avoid getting blacklisted.
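Here is a minimal sketch of the honeypot check from point 5, using requests and BeautifulSoup to skip links whose inline style hides them. It only catches inline styles; links hidden via external CSS would need a rendered-page check instead.

```python
import requests
from bs4 import BeautifulSoup

# Inline-style markers that usually indicate a hidden (honeypot) link.
HIDDEN_MARKERS = ("display:none", "visibility:hidden", "color:#fff", "color:#ffffff")

html = requests.get("https://example.com", timeout=30).text   # placeholder site
soup = BeautifulSoup(html, "html.parser")

safe_links = []
for a in soup.find_all("a", href=True):
    style = (a.get("style") or "").lower().replace(" ", "")
    # Skip links styled to be invisible -- likely traps laid for bots.
    if any(marker in style for marker in HIDDEN_MARKERS):
        continue
    safe_links.append(a["href"])

print(safe_links[:10])
```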
7. Keep Website Changes in Check
Websites change their layouts for various reasons, and often they do so specifically to block scrapers. Sites may place design elements in unexpected or random places, and even big-name websites use this tactic. The crawler you use should therefore be able to cope with these changes: it needs to detect ongoing layout changes and keep scraping regardless. Monitoring the number of successful requests per crawl is an easy way to notice when something has broken.
Another way to ensure ongoing monitoring is to write a unit test against a specific URL on the target site, using one URL from each section of the website. This will help you detect layout changes early, and only a few such requests every 24 hours are enough to avoid any pause in the scraping procedure.
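A minimal sketch of such a unit test; the URL and CSS selector are hypothetical placeholders, one pair per section of the target site.

```python
import unittest
import requests
from bs4 import BeautifulSoup

class LayoutSmokeTest(unittest.TestCase):
    URL = "https://example.com/products?page=1"     # one representative URL per site section
    SELECTOR = "div.product-card span.price"        # hypothetical selector the scraper relies on

    def test_expected_elements_still_present(self):
        html = requests.get(self.URL, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        # If the site changed its layout, this selector stops matching and the test fails.
        self.assertTrue(soup.select(self.SELECTOR), "Layout changed: selector no longer matches")

if __name__ == "__main__":
    unittest.main()
```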
8. Employ a CAPTCHA Solving Service
CAPTCHAs are one of the most widely used anti-scraping tools, and most of the time crawlers cannot bypass them on their own. As a recourse, many services have been designed to help you carry on scraping anyway, such as CAPTCHA-solving solutions like AntiCAPTCHA. Websites that require a CAPTCHA effectively make it mandatory for crawlers to use such tools. Some of these services can be slow and expensive, so you will have to choose wisely to ensure the service isn't too extravagant for your budget.
9. Google Cache can be a Source too
Throughout the WWW there is a large amount of static data (data that doesn't change much over time). In such cases, Google's cached copy can be your last resort for web scraping: you can acquire the data directly from the cached copies, which is sometimes easier than scraping the website itself. Just add "http://webcache.googleusercontent.com/search?q=cache:" as a prefix to your URL and you are ready to go. This option comes in handy for websites that are hard to scrape yet mostly constant over time, and it is mostly hassle-free because nobody is trying to block you at every turn. It isn't entirely reliable, though; LinkedIn, for example, denies Google permission to cache its data, so for such a website it is best to opt for another method.
Give Datahut a Try! Datahut specializes in web scraping services. We intend to remove all the hurdles from your way, including any such anti-scraping tools. To understand more about us and experience our services, contact us. #antiscrapingtools #webscraping

Frequently Asked Questions about anti scraping technology

What is anti scraping?

Anti-scraping tools can identify the non-genuine visitors and prevent them from acquiring data for their use. These anti-scraping techniques can be as simple as IP address detection and as complex as Javascript verification.

What does scraping mean in technology?

Scraping, or “web scraping,” is the process of extracting large amounts of information from a website. This may involve downloading several web pages or the entire site. … Scraping can also be done automatically using web scraping software. This is the most common way to download a large number of pages from a website.

What is scraping used for?

Web scraping is the process of collecting structured web data in an automated fashion. It’s also called web data extraction. Some of the main use cases of web scraping include price monitoring, price intelligence, news monitoring, lead generation, and market research among many others.
