Scrapy Proxy

How To Set Up A Custom Proxy In Scrapy? – Zyte

When scraping the web at a reasonable scale, you can come across a series of problems and challenges. You may want to access a website from a specific country/region. Or maybe you want to work around anti-bot solutions. Whatever the case, to overcome these obstacles you need to use and manage proxies. In this article, I’m going to cover how to set up a custom proxy inside your Scrapy spider in an easy and straightforward way. We’re also going to discuss the best ways to solve your current and future proxy issues. You will learn how to do it yourself, but you can also just use Zyte Proxy Manager (formerly Crawlera) to take care of your proxies.
Why do you need smart proxies for your web scraping projects?
If you are extracting data from the web at scale, you’ve probably already figured out the answer. IP banning. The website you are targeting might not like that you are extracting data even though what you are doing is totally ethical and legal. When your scraper is banned, it can really hurt your business because the incoming data flow that you were so used to is suddenly missing. Also, sometimes websites have different information displayed based on country or region. To solve these problems we use proxies for successful requests to access the public data we need.
Setting up proxies in Scrapy
Setting up a proxy inside Scrapy is easy. There are two easy ways to use proxies with Scrapy – passing proxy info as a request parameter or implementing a custom proxy middleware.
Option 1: Via request parameters
Normally when you send a request in Scrapy you just pass the URL you are targeting and maybe a callback function. If you want to use a specific proxy for that URL you can pass it as a meta parameter, like this:
def start_requests(self):
    for url in self.start_urls:
        yield Request(
            url=url,
            callback=self.parse,
            headers={"User-Agent": "My UserAgent"},
            meta={"proxy": "http://192.168.1.1:8050"},
        )
The way it works is that inside Scrapy, there’s a middleware called HttpProxyMiddleware which takes the proxy meta parameter from the request object and uses it as the proxy for that request. The middleware is enabled by default so there is no need to set it up.
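Because the proxy is chosen per request, you can also vary it by target. The helper below is a hypothetical sketch (the domain names and proxy URLs are made-up placeholders) showing one way to pick a proxy based on the URL being requested:

```python
from urllib.parse import urlparse

# Hypothetical mapping of target domains to region-specific proxies;
# all hostnames and ports here are illustrative placeholders.
PROXY_BY_DOMAIN = {
    "shop.example.de": "http://de-proxy.example.com:8050",
    "shop.example.fr": "http://fr-proxy.example.com:8050",
}

DEFAULT_PROXY = "http://192.168.1.1:8050"

def proxy_for(url: str) -> str:
    """Return the proxy to use for this URL, falling back to a default."""
    host = urlparse(url).netloc
    return PROXY_BY_DOMAIN.get(host, DEFAULT_PROXY)
```

In the spider you would then build each request with meta={"proxy": proxy_for(url)}.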
Option 2: Create custom middleware
Another way to utilize proxies while scraping is to actually create your own middleware. This way the solution is more modular and isolated. Essentially, what we need to do is the same thing as when passing the proxy as a meta parameter:
from w3lib.http import basic_auth_header

class CustomProxyMiddleware(object):
    def process_request(self, request, spider):
        # The proxy URL and credentials below are illustrative placeholders
        request.meta["proxy"] = "http://192.168.1.1:8050"
        request.headers["Proxy-Authorization"] = basic_auth_header(
            "<proxy_username>", "<proxy_password>")
In the code above, we define the proxy URL and the necessary authentication info. Make sure that you also enable this middleware in the settings and put it before the HttpProxyMiddleware:
DOWNLOADER_MIDDLEWARES = {
    # adjust the module path to wherever your middleware lives
    'myproject.middlewares.CustomProxyMiddleware': 350,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}
How to verify if your proxy is working?
To verify that you are indeed scraping through your proxy, you can scrape a test site that tells you your IP address and location (an IP-echo service such as httpbin.org/ip). If it shows the proxy address and not your computer’s actual IP, the proxy is working correctly.
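The check can also be automated. The sketch below assumes the test site returns JSON of the form {"origin": "<ip>"} (the shape httpbin.org/ip uses) and that the proxy URL names the proxy by IP; it compares the echoed address against the proxy’s host:

```python
import json
from urllib.parse import urlparse

def proxy_is_used(response_body: str, proxy_url: str) -> bool:
    """True if the IP echoed by the test site matches the proxy host.

    Assumes the test site returns JSON like {"origin": "<ip>"} and that
    proxy_url refers to the proxy by IP address rather than hostname.
    """
    echoed_ip = json.loads(response_body)["origin"]
    proxy_host = urlparse(proxy_url).hostname
    return echoed_ip == proxy_host
```

You would call this in the spider’s parse callback with response.text and the proxy you set on the request.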
Rotating proxies
Now that you know how to set up Scrapy to use a proxy you might think that you are done. Your IP banning problems are solved forever. Unfortunately not! What if the proxy we just set up gets banned as well? What if you need multiple proxies for multiple pages? Don’t worry, there is a solution called IP rotation, and it is key to successful scraping projects.
When you rotate a pool of IP addresses, what you’re doing is essentially randomly picking one address to make the request in your scraper. If it succeeds, meaning it returns the proper HTML page, we can extract data and we’re happy. If it fails for some reason (IP ban, timeout error, etc.) we can’t extract the data, so we need to pick another IP address from the pool and try again. Obviously, this can be a nightmare to manage manually, so we recommend using an automated solution for this.
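The pick-and-retry loop described above can be sketched in plain Python (the pool contents and the notion of a “dead” proxy are illustrative; in practice middleware such as scrapy-rotating-proxies handles this bookkeeping for you):

```python
import random

class ProxyPool:
    """Minimal sketch of IP rotation: pick randomly, retire proxies that fail."""

    def __init__(self, proxies):
        self.alive = set(proxies)

    def pick(self):
        # Randomly choose one of the proxies still believed to be alive.
        if not self.alive:
            raise RuntimeError("no live proxies left in the pool")
        return random.choice(sorted(self.alive))

    def mark_dead(self, proxy):
        # Called after a ban, timeout, or other failure through this proxy.
        self.alive.discard(proxy)
```

A scraper would pick() before each request, and on failure mark_dead() the proxy and pick() again to retry.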
IP rotation in Scrapy
If you want to implement IP rotation for your Scrapy spider you can install the scrapy-rotating-proxies middleware, which was created just for this.
It takes care of the rotation itself, adjusts crawling speed, and makes sure the proxies in use are actually alive.
After installing and enabling the middleware, you just have to add the proxies you want to use as a list to ROTATING_PROXY_LIST in your settings (the addresses below are placeholders):
ROTATING_PROXY_LIST = [
    'proxy1.example.com:8000',
    'proxy2.example.com:8031',
    # ...
]
Also, you can customize things like ban detection methods, page retries with different proxies, etc…
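For example, the project’s documented settings let you tune retry and ban-handling behaviour in settings.py (the ban-policy module path shown is a hypothetical placeholder for your own class):

```python
# settings.py: tuning knobs for scrapy-rotating-proxies
ROTATING_PROXY_PAGE_RETRY_TIMES = 5    # retries with a different proxy per page
ROTATING_PROXY_BACKOFF_BASE = 300      # base backoff (seconds) before re-checking a dead proxy
ROTATING_PROXY_BAN_POLICY = 'myproject.policy.MyBanPolicy'  # custom ban detection (hypothetical path)
```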
Conclusion
So now you know how to set up a proxy in your Scrapy project and how to manage simple IP rotation. But if you are scaling up your scraping projects, you will quickly find yourself drowning in proxy-related issues. You will lose data quality, and ultimately you will waste a lot of time and resources dealing with proxy problems.
For example, you will find yourself dealing with unreliable proxies, getting a poor success rate and poor data quality, and really just getting bogged down in minor technical details that stop you from focusing on what really matters: making use of the data. How can we end this never-ending struggle? By using an already available solution that handles all the mentioned headaches well.
This is exactly why we created Zyte Proxy Manager (formerly Crawlera). Zyte Proxy Manager enables you to reliably crawl at scale, managing thousands of proxies internally, so you don’t have to. You never need to worry about rotating or swapping proxies again. Here’s how you can use Zyte Proxy Manager with Scrapy.
If you’re tired of troubleshooting proxy issues and would like to give Zyte Proxy Manager a try, then sign up today. It has a 14-day FREE trial!
Useful links
Scrapy proxy rotating middleware
Discussion on Github about Socks5 proxies and scrapy
Everything you need to know about Using a Proxy in Scrapy

Everything you need to know about Using a Proxy in Scrapy

A proxy server acts as a tunnel that lets you get things done without drawing too much attention to yourself. It is like your gateway to the internet, with a mask: a whole new layer between you, the end user, and the internet. Proxy servers are designed to provide security and privacy, depending on the use case.
When you use a proxy server, it channels the flow of internet traffic and fetches the URL you requested. The response to the request comes back through the same tunnel, and the data you need is then delivered to you.
So, if this is the use of a Proxy Server, why do you need one? Why can’t this be done normally?
Because proxies do more than that: they act as a firewall between you and the website. They provide shared connections, cache data to speed up requests, and filter traffic. This helps keep users protected from the harmful stuff on the internet while providing a high level of privacy.
OPERATING PROXY SERVERS
Every device connected to the internet has a unique IP address. This is something like how your house has a physical address; think of the same thing in the virtual world. The data you request is sent to this address, and when the information is processed, it is returned to that same address.
A proxy server, in layman’s terms, is like another computer with its own IP address that can be accessed from your computer. Every request that you send goes to the proxy server first. The server then makes the request on your behalf, receives the response, and forwards it to you so you can access the webpage.
What actually happens is that when the proxy server sends your data as a web request, it changes the request a little, but you still get to see what you expected. The server changes the IP address so that the web server does not know where you are accessing the data from, and it can encrypt your data so that it is unreadable in transit.
USES OF A PROXY SERVER
Using a proxy server caters to a wide range of use cases for both individuals and organizations.
1. TO CONTROL INTERNET USAGE
Companies often want to monitor what their employees search for on the internet. They may want to restrict access to certain sites which they think might reduce productivity. Similarly, parents might want to control what their kids access on the internet. Companies use proxies to deny access and redirect you to a page asking you to refrain from visiting the mentioned site. They can also keep track of the time you spend cyberloafing.
2. BANDWIDTH SAVINGS
Organizations on the whole get better performance by using a good proxy server. Proxy servers tend to save a copy of the website (a cache). Therefore, when you try to access a website, the proxy server checks whether the saved copy is recent and, if so, sends it to you. Say there are multiple employees trying to access your company’s website at the same time: the proxy server only has to access the website once and then saves the copy locally. It then sends that information to any employee trying to access the page. This improves performance.
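That caching behaviour can be sketched in a few lines of Python. This is a hypothetical TTL-based cache, not any particular proxy product; the fetch function is supplied by the caller:

```python
import time

class CachingProxy:
    """Minimal sketch of a caching forward proxy's bookkeeping (illustrative only)."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch      # function that actually retrieves a URL
        self.ttl = ttl_seconds  # how long a saved copy counts as "recent"
        self.cache = {}         # url -> (timestamp, body)

    def get(self, url):
        entry = self.cache.get(url)
        if entry is not None:
            saved_at, body = entry
            if time.time() - saved_at < self.ttl:
                return body     # recent copy: serve from cache
        body = self.fetch(url)  # stale or missing: fetch once, save a copy
        self.cache[url] = (time.time(), body)
        return body
```

With many clients requesting the same page, only the first call reaches the origin site; the rest are served from the saved copy until it expires.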
3. PRIVACY
Proxy servers are capable of hiding who sent the original request. The destination server will not know the original source of the request, which helps keep things private.
4. SECURITY
Proxy servers allow you to encrypt web requests so that you can keep your transactions safe. This prevents malicious sites from accessing your information. Organizations can also set up a VPN alongside a proxy so that employees access the internet only via the company proxy. This lets the company control who accesses the data while allowing employees to log in from a remote location.
5. WEB SCRAPING USING A PROXY SERVER
Using a proxy server, companies can access restricted data. Web scraping is a technique that allows its users to extract a large amount of data from the web. Data found on the web can normally only be viewed online; web scraping enables us to extract large datasets for our own use, stored as a local file on your system or as a spreadsheet.
The process of web scraping can be done either by bots or by humans. Bots are capable of performing tasks a lot faster than humans, but using bots can also cause a lot of lost revenue for the victims. About 37.9% of web traffic is due to bots. Distil Networks curated a list from Scraping Hub, Diffbot, and ScreenScraper on why web scraping is done. Proxies play a crucial role when it comes to web scraping. The main advantage of using a proxy is that it allows you to hide your machine’s IP address. This way, when you send requests, the target site sees them coming from a proxy IP and not your original IP.
TYPES OF PROXY SERVERS
There are different types of proxy servers that you can configure, and each of them addresses a different use case. You need to know what problem you are trying to solve and configure the correct proxy.
1. TRANSPARENT PROXY
A transparent proxy informs the website that the request is coming through a proxy and still passes your information to the server. Schools, libraries, and businesses often use this for content filtering.
2. ANONYMOUS PROXY
An anonymous proxy identifies itself as a proxy but refrains from forwarding your identifying information to the web server. This helps prevent data theft. Companies generally pick up customers’ location information and show them the relevant local business information. If you do not want companies to access this information of yours, you can use an anonymous proxy.
3. DISTORTING PROXY
A distorting proxy identifies itself as a proxy yet passes along an incorrect IP address. This is similar to an anonymous proxy: by showing a false IP, it makes it difficult for companies to identify the original location from which you are accessing the site.
4. HIGH ANONYMITY PROXY
High anonymity proxy servers keep changing the IP address they present to the web server, making it difficult to identify the source and destination of the traffic. It is the most secure way to browse the internet.
WEB SCRAPING
Web scraping generally happens using spiders. Spiders are language-specific classes that define how a web page should be scraped: how to crawl the links, and how to extract structured data from those pages. In other words, spiders allow you to define custom behavior for crawling and parsing pages for a particular site, a group of sites, or a group of use cases.
This is what happens in the back end. In the front end, the web scraping software that you have installed will automatically load and extract data based on the spider class; this can be multiple pages from the same website, or it can span across different pages to get the information. The information can then be downloaded in a single click.
PROXIES FOR WEB SCRAPING
IP banning is a common issue when you want to scrape the web for content. The website you are targeting might have information it does not want you to access, or it may simply not like you accessing its exclusive information. Therefore, when it finds out that you are trying to copy its data, it may end up banning your IP address.
When your IP address is banned, it affects your business because the data flow you usually rely on no longer works. In order to solve these issues, you can use a proxy server in Scrapy.
HOW CAN WEBSITES DETECT WEB SCRAPING?
Websites have their own methods to identify scrapers. Some install traps like honey pots that identify and block scrapers, but those are specific cases. In general, this is how most websites identify scraping; avoid these patterns if you want to stay unidentified.
1. Unusual traffic or download rate from a single address within a short time.
2. Repetitive tasks performed over and over on a website. Humans do not endlessly repeat the same actions, so such behavior is easily identified.
3. Honey pots: invisible fake links that humans cannot see but a spider can. When the spider crawls the links, it sets off an alarm and alerts the site.
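As an illustration of point 3, a crawler that wants to avoid obvious honey pots can skip links that are hidden from human view. The heuristic below is a simplified sketch operating on a link’s HTML attributes (it only catches inline-style and attribute-based hiding, not hiding done in external CSS):

```python
def looks_like_honeypot(link_attrs: dict) -> bool:
    """Heuristic: treat links hidden from humans as probable honey pots."""
    style = link_attrs.get("style", "").replace(" ", "").lower()
    if "display:none" in style or "visibility:hidden" in style:
        return True
    # The HTML 'hidden' attribute also hides the element from users.
    return "hidden" in link_attrs
```

A spider would run this check on each extracted link and simply decline to follow the ones that match.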
SETTING UP PROXIES
Setting up a proxy in Scrapy is extremely easy. There are two ways by which you can implement this functionality:
1. Using request parameters
2. Creating a custom middleware
1. USING REQUEST PARAMETERS
Generally, you just pass a URL and maybe a callback function when you send a request in Scrapy. But if you want to use a specific proxy for a particular URL, you can do so by passing a meta parameter. Consider this example:
def start_requests(self):
    for url in self.start_urls:
        yield Request(
            url=url,
            callback=self.parse,
            headers={"User-Agent": "My UserAgent"},
            meta={"proxy": "http://192.168.1.1:8050"},
        )
There is a middleware in Scrapy called HttpProxyMiddleware, which takes the proxy meta key from the request object and sets it up as the proxy for that request.
2. CREATE A CUSTOM MIDDLEWARE
In case you do not want to use the middleware that is already provided, the next option is to create a custom middleware. What the middleware needs to do is essentially the same as passing the proxy as a meta parameter:
from w3lib.http import basic_auth_header

class CustomProxyMiddleware(object):
    def process_request(self, request, spider):
        # The proxy URL and credentials below are illustrative placeholders
        request.meta["proxy"] = "http://192.168.1.1:8050"
        request.headers["Proxy-Authorization"] = basic_auth_header(
            "<proxy_username>", "<proxy_password>")
In the above code, we define the proxy URL and pass the necessary authentication info. Make sure you also enable the middleware in the settings and put it before the HttpProxyMiddleware:
DOWNLOADER_MIDDLEWARES = {
    # adjust the module path to wherever your middleware lives
    'myproject.middlewares.CustomProxyMiddleware': 350,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}
TESTING A PROXY
It is important to test a proxy before you use it. You can test it on a site that echoes your IP address. If the site shows you the IP address of your proxy and not your actual IP, the proxy is working.
ROTATING PROXIES
Is it not possible to ban a proxy? Unfortunately, a proxy IP can be banned as well. But there is a solution for that too: the key is to rotate IPs. When you rotate a pool of IP addresses, your scraper randomly picks one address from the pool and requests the web page through it.
If the request succeeds, the page is returned. If that IP is banned, another IP is picked from the pool. Managing this manually requires a lot of effort, but if you install a Scrapy rotating-proxy middleware you can automate it.
FAQs
How do you use Crawlera? To use Crawlera, a few steps are needed. First subscribe to Crawlera in the Scrapinghub dashboard; from the same dashboard, get your API key. With this API key, you can make your requests.
What can Scrapy do? Scrapy can be used to extract data with the help of APIs, and it can also be used as a general-purpose web crawler.
What does Scrapy mean? Scrapy is a free and open-source web crawling framework written in Python.
What are proxies and how do you use them? Proxies are a great tool to access online information without the risks attached. You can use them to access crucial data online, and more.
Can I crawl any website? Yes, you can crawl any website.
What is a user agent in Scrapy? User-agent refers to a string that a browser uses to identify itself to the web server. The string is sent on every HTTP request in the request header. In Scrapy, the default user agent identifies itself as Scrapy plus its version.
Does Google allow scraping? It is neither legal nor illegal to scrape Google.
What is Scrapy used for? Scrapy can be used to extract data with the help of APIs, and it can also be used as a general-purpose web crawler.
How does a proxy work? A proxy server acts as a middleman between you and the internet. It’s a server that separates end users from the websites they browse.
Are free proxies safe? No, free proxies are not safe.
CONCLUSION
Web Scraping tools pave the way for easy access to this data. Proxies play a vital role when it comes to web scraping. Proxy servers work as a middleman between your web scraping tool and the website. The HTTP request to any website will pass through the proxy server first and the proxy server will pass on the request to the target website. The main reason why you need a middleman proxy server is to hide your IP address from all websites so that even in the worst case you will not get blacklisted.
Now you know what web scraping is and how to use a proxy effectively in the process. The next thing to look at is where to purchase a proxy.
Limeproxies offers dedicated proxies that will help you perform web scraping. It offers all of the above-mentioned benefits in addition to:
1. 1 Gbps speed with 100+ subnets
2. 99.9% guaranteed uptime
3. Use of up to 25 IP addresses at a time
4. Guaranteed fast response time for any tech support issues that may arise
5. Dedicated IPs: with our premium private proxies, you’ll never have to share an IP address with anyone else
6. Ability to change your proxy IP address on demand, anytime
7. Fully automated control panel
8. 24/7 customer service
With guaranteed uptime, support, and services, you can be assured that your work will go as scheduled.
About the author: Rachael Chapman. A complete gamer and a tech geek who brings out all her thoughts and love in writing.
Frequently Asked Questions about scrapy proxy

What is proxy in Scrapy?

When using a proxy server, it channels the flow of internet traffic and gets you to the URL requested. … The response to this request also comes through the same tunnel, and then the data that you need is provided to you.Dec 28, 2019

How do I use a proxy in Scrapy Python?

Setting up proxies in Scrapy:
def start_requests(self):
    for url in self.start_urls:
        yield Request(
            url=url,
            callback=self.parse,
            headers={"User-Agent": "My UserAgent"},
            meta={"proxy": "http://192.168.1.1:8050"},
        )
Aug 8, 2019

Does Scrapy use proxies?

Does Scrapy work with HTTP proxies? Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware.Dec 16, 2013
