ScrapeBox Backlinks

Mass Backlink Checker – ScrapeBox

The ScrapeBox Backlink Checker Addon lets you bulk check the number of backlinks a URL or domain has, and also lets you download the top 1,000 backlinks for each URL, as provided by the Mozscape API.
This is great for doing backlink audits of your own website, or for competitor analysis to see the URLs where your competitors have obtained their backlinks.
To use this addon you will need to register for a free or paid Mozscape API key and add it to the account setup in the addon. URLs to check can be loaded into the addon from a text file, or transferred automatically from ScrapeBox.
You can choose to check the backlinks to the URL or to the domain, and backlinks for all domains can be saved to a single text file, or you can save the backlinks and their corresponding domains to individual files for use in Excel.
So if you need to find out where your competitors are getting their links from, this is a great addon for fetching those backlink URLs so you can see how the links were obtained and whether you can get links in the same locations. You could even load the URLs directly into the Comment Poster and try to obtain links on compatible platforms automatically.
There's no limit to the number of domains you can check. If you have a free API account, the tool throttles requests to one check every 10 seconds per free API key added to the tool; if you have more than one API key, checking time is reduced.
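For readers who prefer scripting their own checks, here is a minimal sketch of the same idea, assuming the legacy Mozscape URL Metrics endpoint and HTTP basic auth; the endpoint, column flag, and credentials shown are assumptions, so check the current Moz Links API docs before relying on it:

```python
# Minimal sketch, not the addon's actual code: bulk backlink lookups against
# the legacy Mozscape URL Metrics endpoint, throttled to one request per
# 10 seconds as the free tier requires.
import time
import requests

ACCESS_ID = "member-xxxxxxxx"   # hypothetical credentials
SECRET_KEY = "your-secret-key"
ENDPOINT = "https://lsapi.seomoz.com/linkscape/url-metrics/"

def url_metrics(url):
    # Cols is a bit flag selecting which metrics to return; the value here
    # is a placeholder -- look up the real flags in the Moz API docs.
    resp = requests.get(
        ENDPOINT + requests.utils.quote(url, safe=""),
        params={"Cols": "2048"},
        auth=(ACCESS_ID, SECRET_KEY),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

for target in ["http://example.com/", "http://example.org/"]:
    print(target, url_metrics(target))
    time.sleep(10)  # free-tier throttle: one check every 10 seconds per key
```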
ScrapeBox Guide – White Hat SEO | Powered by Search

The Forbidden S-Word Of SEO: ScrapeBox
So you and your company (yes, even you one-[wo]man armies out there) are brand new to the world of search engine optimization. Maybe your offline business has been suffering due to the economy or a myriad of other factors; however, you are a warrior, a trooper, and you will not take no for an answer. You know that the internet is a gold mine, but you have trouble tapping into this unending fountain of visitors. You manage to convince yourself that an online game plan is required. Over the course of a few days, you feverishly search the internet for tools and software that can help your endeavors.
After much frustration, panic, and tears, you finally come across a forum that is discussing SEO tactics, and your eyes are drawn to the back-and-forth banter over a particular piece of software called ScrapeBox. In your desire to make your business an online sensation, you register for an account on this forum and introduce yourself as a newcomer to the search engine marketing world. People respond to you and welcome you aboard and all is going well… but then you make a very serious mistake… an oh-so-terrible mistake… You proceed to ask your very first question. You want input from the “SEO” community on whether ScrapeBox will help your website rank in the search engines. The feedback you receive from the majority of self-proclaimed ethical webmasters and search engine marketers alike is most accurately described in the following photo:
While this is amusing to read about, it also carries a lot of truth behind it. Time and time again we see newcomers to the SEO community being flamed off the discussion boards for questions deemed misguided, misinformed, black hat, grey hat, upside-down hat, you name it. If you haven't seen this behavior, pay a visit to some of the better-known forums on search engine optimization tips and techniques. Which begs the question…
Is ScrapeBox A White Hat SEO Tool?
I will pose an answer to this question with another question: is the sky blue? At this point you either think I am losing it, or you clearly understand where this is headed (and kudos to you if the latter is true). The sky is in fact not blue, or red, or even orange for that matter. Its color depends upon the time of day, whether we are color blind, if we have sunglasses on, or if we can even see at all! Likewise with ScrapeBox or any other tool: software can be abused and thrown around like a plague on the human race, which sadly is the case more often than not. On the flip side of that coin, however, is an overwhelming amount of power that can speed up daily tasks and production for even the purest of the pure, hardcore white-hat junkies. ScrapeBox is a piece of software that costs $97. It collects (or scrapes, obviously) information off the internet. Some of its features include:
Harvesting of proxies
Being able to create a sitemap of a website (did you know this?)
Ability to make an RSS feed (did you know this?)
Collecting keyword ideas
Collecting websites based on a footprint (handy)
Blog commenting en masse (not a wise idea)
Pinging of URLs
RSS submission
and more…
The Real ScrapeBox
While ScrapeBox has earned a horrible reputation due to the unceasing amount of spam it has enabled (alongside Xrumer), it has a fair number of legitimate uses that can greatly speed up your day-to-day workflow. Let's take a look.
1: How to Find Long Tail Keywords
2: Blogger Outreach Guidelines
3: Backlink Checker
4: WHOIS Checker
5: TDNAM Addon – GoDaddy Auctions
6: Sitemap Scraper
7: Outbound Link Checker
8: Bulk URL Shortener
9: Malware & Phishing Finder
10: Rapid Indexer
11: Page Scanner / Categorizer
12: Link Extractor
13: Competition Finder
14: Cache Extractor
15: Fake Page Rank Checker
16: Duplicate Remover
17: Domain Name Checker
18: Meta Scraper
19: Domain Resolver
1: How To Find Long Tail Keywords
If you need to generate industry-related ideas for your marketing plan and content strategy, you can accomplish this with ScrapeBox in a matter of minutes, compared to hours of work the manual way. Here's how it's done:
For this example, let’s use the SEO industry. In the picture below, I started with 3 key-phrases.
I entered these few key-phrases into the text area on the left and then clicked the “scrape” button along the bottom.
It came back with a few hundred results.
I then copy/pasted these new additions back into the main text area on the left and re-ran the “scrape” a second time.
The final result brought back a ton of key-phrases, that are possible content ideas and niche markets to go after.
During this time I stretched for the win.
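If you ever want to reproduce this two-pass expansion outside ScrapeBox, here is a minimal sketch against the public Google Suggest endpoint; the client parameter and response shape are assumptions based on that endpoint's long-standing behavior:

```python
# Minimal sketch: seed phrases -> autocomplete suggestions, then feed the
# suggestions back in for a second pass, just like re-running the scrape.
import requests

def suggestions(phrase):
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": phrase},  # client value is an assumption
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[1]  # response shape: [query, [suggestion, ...]]

seeds = ["seo tools", "seo tips", "seo services"]
first_pass = {s for seed in seeds for s in suggestions(seed)}
second_pass = {s for phrase in first_pass for s in suggestions(phrase)}
print(sorted(first_pass | second_pass))
```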
2: Blogger Outreach Guidelines
Harvesting material is a good way to get an overview of the market; however, you won't get much accomplished unless you actually do something with that data. The best way to launch your website to the top of the SERPs in 2013 is by forming relationships with those in the industry, by cultivating useful information, and by having authority status in your select market. Outreach to influential people is one of the best ways to get this done, and you can accomplish that goal with ScrapeBox. Let's take a look. Some people still search the following set of phrases manually in order to find link prospects:
Keyword guest blogger wanted
Keyword guest writer
Keyword guest blog post writer
Keyword “write for us” OR “write for me”
Keyword “Submit a blog post”
Keyword “Become a contributor”
Keyword “guest blogger”
Keyword “Add blog post”
Keyword “guest post”
Keyword “Write for us”
Keyword submit blog post
Keyword “guest column”
Keyword “contributing author”
Keyword “Submit post”
Keyword “submit one guest post”
Keyword “write for us”
Keyword “Suggest a guest post”
Keyword “Send a guest post”
Keyword “contributing writer”
Keyword “Submit blog post”
Keyword inurl:contributors
Keyword “guest article OR post”
Keyword add blog post
Keyword “submit a guest post”
Keyword “Become an author”
Keyword submit post
Keyword “submit your own guest post”
Keyword “Contribute to our site”
Keyword magazines
Keyword “Submit an article”
Keyword “Add a blog post”
Keyword “Submit a guest post”
Keyword “Guest bloggers wanted”
Keyword “submit your guest post”
Keyword “guest article”
Keyword inurl:guest*posts
Keyword Become guest writer
Keyword inurl:guest*blogger
Keyword “become a contributor” OR “contribute to this site”
Now I don't know about you, but having to search each one of these manually and then record the results into an Excel spreadsheet would push my sanity to its limit. I prefer to have a solid foundation and workflow, and then automate the tasks that can be automated. With ScrapeBox, you can too.
The very first task on your list should be to decide on your market's topic. If you don't know the main focus of your website, you should probably revisit that BEFORE you start this step. Once you have your market focus nailed down, it is time to enter that focus into the software.
There are a variety of tools out there that allow you to find potential link prospects. I won't name the obvious ones because they are… obviously obvious! The purpose of this post is to show the white-hat side of this software, because you must always try to find the good in everything, right?
The only issue with using ScrapeBox as a means of finding people to form a relationship with is the fact that proxies will be needed. Like other software which tracks your rankings in the SERPs or goes out around the web to find potential prospects, ScrapeBox can be used with or without proxies. As you may be aware, automating requests to the search engines is not the best course of action, and the last thing you want is your IP banned due to the large number of requests being sent out. You can specify settings to reduce the amount of work being done at any given time, though. For this reason, a reliable proxy service will be the order of the day if you decide to go this route.
I like to do some manual work and search out really high-value prospects on my own, but some people prefer the use of proxies for gathering large-scale data. It really depends on the type of project, the amount of work involved, and your standpoint on the issue (yes, some will argue that using proxy services is not white hat, while others will dismiss the idea of proxies being black hat as complete nonsense… which type are you?).
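To make the automation concrete, here is a minimal sketch of the part ScrapeBox handles for you: crossing every keyword with every footprint to produce the full list of search queries. The footprints and keywords below are just illustrative samples from the list above:

```python
# Minimal sketch of what the harvester automates: every keyword crossed with
# every footprint yields the full set of search queries.
footprints = [
    '"write for us"',
    '"guest post"',
    '"become a contributor"',
    "inurl:contributors",
]
keywords = ["seo", "content marketing", "link building"]

for keyword in keywords:
    for footprint in footprints:
        print(f"{keyword} {footprint}")  # one search you'd otherwise type by hand
```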
3: Backlink Checker
Everyone loves backlinks. They are what the web is made of. Being involved in the search engine marketing industry will turn you into a link junkie/lover in no time at all. Love them or hate them, links have always played, and for the foreseeable future will continue to play, a major role in the ranking power of any website. Why? Because the internet is made up entirely of links: without links, there would not be any internet as we know it today. You get from site A to site B via a hyperlink, and until that changes, backlinks will be here for some time.
Backlinks can help and they can hurt. When they hurt, it is up to you to find them and remove them or get them disavowed. Perhaps the photo below reminds you of those late nights you spent trying to fix your backlink profile.
Due to this fact, backlink checkers have risen left and right over the years. Some are free, some are paid, some are better than others (as with anything in life) and others downright suck. As you may have guessed, I am now going to show you how to use ScrapeBox as your own free backlink checker. There are no monthly fees involved, the data is simple, and you are limited to 1,000 links returned in the report; however, it is free.
First, enter your URL in the left text area. Copy it and then paste it into the right-hand side, like so:
Next, move your mouse over the “addons” tab along the top of the screen and select it. You will see something similar to the screenshot below:
If your addons tab looks different, it is probably because you have not installed any addons yet. All you have to do is click “Show available addons” and then install them one by one. It really is that easy! Once you click the backlink checker option underneath the addons tab, you will be presented with a screen like the one below:
All that is required now is to select the “Load from ScrapeBox Harvester” option and then click START. When the results are done, you will have the option of downloading a file with up to 1,000 backlinks. Not too shabby for free…
4: WHOIS Checker
I WHOIS, you WHOIS, we all WHOIS for the “biz”! Seriously though, if you have a bunch of websites you want to check the WHOIS on, ScrapeBox is a nice tool to get the job done. Of course you have your browser extensions and plugins, as well as manually looking up one domain at a time. The thing about this tool is that it is simple, but did you know that you can also check bulk URLs at the same time? Check out the photo below:
When you select the WHOIS option, you will be presented with a screen like the one below. All you have to do at this point is click the “Load” button and then “Load from ScrapeBox harvester”. Once your URLs are loaded, it’s as simple as pressing “Start”.
A word of note here: it is best if the proxies you are using for this addon are SOCKS proxies. Many of the proxies you may use online are not SOCKS and you may run into some errors, so keep that in mind.
Once your WHOIS information has completed, you will get a screen like the one below:
As you may have noticed the information brought back is rather simple, however it can be really useful if you have a bunch of URLs to check at the same time. Have any of you ever had to check the WHOIS for multiple sites at one time and if so, why?
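For the curious, here is a minimal sketch of what a bulk WHOIS lookup looks like at the protocol level, querying TCP port 43 directly. The server shown handles .com/.net; production code would need to pick the right server per TLD and follow referral responses:

```python
# Minimal sketch of a bulk WHOIS lookup over the raw protocol (TCP port 43).
import socket

def whois(domain, server="whois.verisign-grs.com"):
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

for domain in ["example.com", "example.net"]:
    print(f"--- {domain} ---")
    print(whois(domain)[:400])  # first few lines of the record
```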
5: TDNAM Addon – GoDaddy Auctions
Do we have any domain name junkies in the house? The TDNAM addon lets you scrape GoDaddy Auctions for domains whose auctions end within the next 24 hours, and search through them. As per usual, begin by clicking the addons tab and installing the TDNAM addon if it is not already installed. The way it works is quite simple:
You enter your keyword in order to start the search (try to start broad and work your way down the narrower path).
You select the type of TLD you want to find (top level domain).
You click… Start.
Done.
Take a look at the photo below to see what I mean:
From here, you can right click on any of the listings and then proceed to GoDaddy’s website for more information. It’s great to get an overview of what is going on in the domain name market and you can get through a lot of data rather quickly.
As you can see, the items displayed to you include:
Traffic
Price
End Time
Domain Age
Export Options
How many of you were aware that ScrapeBox can be used hand-in-hand with GoDaddy?
6: Sitemap Scraper
The sitemap scraper is a useful tool if you want to pull back the URLs from your website or from your competitors. As always, please install the addon from the available list of addons. What this addon does is load a valid sitemap from a domain and then scrape all the URLs out of that sitemap.
The first thing you should do is enter in your valid sitemap file and then copy/paste it over to the harvester section (just like we did in the previous examples). Take a look at the picture below for an idea of what this will look like:
It should also be noted that there are options for “Deep Crawl”. This allows the tool to go out to each link found and then also pull in more internal links from those originally found. Simple isn’t it?
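As a rough illustration of what the sitemap scraper does, here is a minimal Python sketch that fetches a sitemap.xml and pulls out every <loc> URL; the sitemap URL is a placeholder, and a deep crawl would additionally fetch each page for more internal links:

```python
# Minimal sketch of the sitemap scraper: fetch sitemap.xml and pull every
# <loc> URL out of it.
import requests
import xml.etree.ElementTree as ET

def sitemap_urls(sitemap_url):
    resp = requests.get(sitemap_url, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # Sitemap tags are namespaced, so match on the local tag name.
    return [el.text.strip() for el in root.iter()
            if el.tag.split("}")[-1] == "loc" and el.text]

for url in sitemap_urls("https://example.com/sitemap.xml"):  # placeholder URL
    print(url)
```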
7: Outbound Link Checker
Just as you may have imagined, the outbound link checker is a useful addition to the software in that it allows you to quickly glance at the number of links leaving a particular website. It shows internal links as well. As always, please make sure that this module has been installed from the list of available addons (found along the top bar of the program's interface). Once you have this addon installed, it's time to get to work.
For this example, I chose 3 random URLs and entered them into the text area on the left hand side.
I then proceeded to copy those URLs over to the text harvester area on the right hand side.
After this, I navigated up to the addon tab and selected the Outbound Link Checker option.
Take a look at the screenshot below:
After I loaded in the URLs and clicked Start, I was presented with the following screen:
Another nice feature is the ability to filter out results based upon your own needs. In addition to that, you have the option of removing any error entries. This would be useful if you needed to come back with a list of websites that had more than “X” number of external links. See the photo below:
As far as outbound links go, that’s about it for this addon!
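If you wanted to replicate the core check yourself, here is a minimal sketch that counts a page's internal versus external links by comparing hostnames; it is illustrative only and ignores nofollow, redirects, and JavaScript-rendered links:

```python
# Minimal sketch of the outbound link check: count links on a page and split
# them into internal vs. external by comparing hostnames.
import requests
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

def link_counts(page_url):
    host = urlparse(page_url).netloc
    parser = LinkCollector()
    parser.feed(requests.get(page_url, timeout=30).text)
    internal = external = 0
    for href in parser.hrefs:
        target = urlparse(urljoin(page_url, href)).netloc
        if target == host:
            internal += 1
        elif target:
            external += 1
    return internal, external

print(link_counts("https://example.com/"))  # (internal, external)
```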
8: Bulk URL Shortener
If you have used Twitter or Bitly in the past – then you are definitely familiar with the process of shortening a URL in order to make it fit within a specified number of characters. The problem with many services is that you can only shorten one URL at a time. What if you had to shorten 95 of them? It would get a bit tedious wouldn’t it? Of course it would. So that’s where the Bulk URL Shortener comes into play. This addon allows you to:
Type in a list of URLs
Use URL shortening services to get new, short links
Like we have been doing (if you haven’t gotten the pattern by now), you must make sure that the addon is installed via the available addons selection, under the Addons tab.
I had some trouble getting the URL shortener to work when entering a single URL, but the tool works fine when uploading a text file list of URLs, as in the photo below:
Either way, that is how you go about getting bulk URLs in tiny form – in no time at all!
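For reference, here is a minimal sketch of the same bulk-shortening idea using TinyURL's simple create endpoint; the input file name is hypothetical and the endpoint's availability may change:

```python
# Minimal sketch: read URLs from a text file and shorten each via TinyURL's
# plain-text create endpoint.
import requests

def shorten(url):
    resp = requests.get("https://tinyurl.com/api-create.php",
                        params={"url": url}, timeout=10)
    resp.raise_for_status()
    return resp.text.strip()

with open("urls.txt") as fh:  # hypothetical input file, one URL per line
    for line in fh:
        long_url = line.strip()
        if long_url:
            print(long_url, "->", shorten(long_url))
```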
9: Malware & Phishing Finder
I hate, you hate, we all hate phishing bait! Who knew that malware could be your friend? With ScrapeBox, we can turn the most evil of evils into an inbound link opportunity by playing the Good Samaritan. Not all webmasters are the savvy type; many of them do not even use Google Webmaster Tools, and some don't check their website more than once every 3 months. As is the case with internet vulnerabilities, malware and other exploits make their way around the net like an out-of-control pest. Why not help out others who are less fortunate and inform them? You may just get a link out of the process because they will be so grateful.
This addon connects to a Google database and checks the sites for any Malware currently or from days gone by. As the process is running, you are able to glance very quickly and see which ones are the offenders. Note that sometimes errors will occur for various reasons. For this example, I grabbed a list of pinging URLs. The screenshot below shows the system in action and what you can expect to see:
Pretty simple isn’t it? Now as far as link opportunities are concerned, this takes a bit of skill, but it could be worth the effort depending on the website you have found that is infected. Here are a few steps to take:
Run a list of URLs through the Malware Checker addon
Export the URLs and check them in OpenSiteExplorer for domain authority
Sort the URL list by descending domain authority
Use a browser WHOIS plugin or the built in WHOIS scraper of this tool in order to gather the contact information for each URL/webmaster
Here’s the hard part – you need to reach out to the webmaster and let them know that their site is hosting malware or some other exploit (do not visit the website as it may infect your computer).
Do not ask for a link at this point – wait for them to get back to you and for the issue to be resolved.
Once you have a dialogue with the owner, feel free to form a partnership somehow.
There is no set guideline on how to use this for a backlink opportunity. You have to be creative here as it will be different for every industry you are in.
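A modern way to script the same kind of malware check is Google's Safe Browsing Lookup API (v4). This minimal sketch assumes you have an API key of your own; an empty matches list means nothing was flagged:

```python
# Minimal sketch: check a batch of URLs against the Google Safe Browsing
# Lookup API v4. Requires a (hypothetical here) API key.
import requests

API_KEY = "your-api-key"  # assumption: you have a Safe Browsing key

def flagged(urls):
    body = {
        "client": {"clientId": "whitehat-audit", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY}, json=body, timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("matches", [])  # empty list = nothing flagged

print(flagged(["http://example.com/", "http://example.org/"]))
```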
10: Rapid Indexer
When you want your information to get shared and indexed quickly, Google+ is a great way to get the job done. If for SOME UNKNOWN REASON you cannot use G+ for this venture, you can always resort to using an indexer service. With ScrapeBox, you have the option of utilizing a pre-made list of indexing websites that are sure to get your pages noticed. Here is how you do it:
As you can see in the picture, there is a nice list pre-built for you. This is easy to find. All you have to do is:
Navigate to the addons tab on the main screen of the software
Select the “Show Available Addons” option
Browse to the Rapid Indexer and highlight it.
Download the list from the description section.
Once you have this accomplished (within a minute), you will now want to load up the actual addon itself. As always, make sure it is installed first!
On the addon screen, you have the option of loading up a bunch of URLs you own, alongside the list of indexer services. Note that the limit is roughly 1,000,000 entries, yes, 1 million total. So if you have 100,000 indexer sites and 10 URLs that you own… well, you do the math. Personally though, the average for any normal white-hat webmaster is just a small select few URLs that they own, mostly one or two, along with a few hundred indexer sites; still, a G+ share is an awesome way to get the job done as well. There is also the option to export the list. This isn't really needed for your own personal use, unless you were planning to do some reporting on the matter.
11: Page Scanner / Categorizer
This is a neat feature that many may not be aware of. As with everything else, the power of ScrapeBox is in the addons. Like usual, install it and once that is ready, launch it!
Now, what the page scanner does is let you analyze the HTML source code of a particular URL and then categorize that URL based on your own custom footprints… very cool. Think of the possibilities here. Let's dig deeper. Below you see a screenshot of the addon window.
The very first thing you want to do is import your list of URLs to scan. For this example, I will use a well known WordPress blog (added through the “Load urls from” button above).
Next, you want to edit your own custom footprint (the edit footprints button above). That will look something like the following screen:
Once you have your footprints done and your URLs ready to go, you will want to begin the process of actually scanning the pages. This is how that will look:
With that all set up, you are now ready to begin categorizing your websites. Think really hard about how you could use this to your advantage…
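As a rough sketch of the idea, here is how footprint-based categorization might look in Python; the footprint-to-category map is entirely hypothetical and you would swap in your own:

```python
# Minimal sketch of footprint-based categorization: fetch each URL's HTML
# source and tag it with whichever footprints appear in the markup.
import requests

FOOTPRINTS = {  # hypothetical footprint -> category map; use your own
    "wp-content": "WordPress",
    "Powered by vBulletin": "Forum",
    'content="Joomla': "Joomla",
}

def categorize(url):
    html = requests.get(url, timeout=30).text
    return [label for needle, label in FOOTPRINTS.items() if needle in html]

for url in ["https://example.com/"]:  # placeholder list of harvested URLs
    print(url, categorize(url) or ["Uncategorized"])
```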
12: Link Extractor
For the “xxxx-th” time: if you don't have this addon installed, hover over the addons tab along the top bar of the software, select “show available addons”, and install the Link Extractor module.
When the module appears on the screen, you see some options available to you along the bottom of the addon window, such as seen below:
What these options do is quite simple, but quite powerful. Here is a quick synopsis:
Internal: links pointing to pages on the same domain
External: links pointing to other domains
Both: internal and external together
So, once you load up your URL list your screen will now appear like so:
Once your reporting is complete, you can then export the results as you see fit. Duplicates are removed automatically which is really nice. What you do with this list is where the real power is.
13: Competition Finder
Everyone wants to find out what their competition is doing… Of course I'm right. You have the ability to do some competitor research with ScrapeBox. It's by no means a be-all-end-all kind of research, but it IS there, so you might as well take a look at it.
What this does is pull the number of results that come up in Google for a particular keyword. Everyone knows this number is not accurate (especially when you browse to the end pages of the SERPs and find out that the real number is actually a lot smaller), but for a general overview of the landscape, it's a great way to become familiar with the industry you are tackling. To start, you want to:
Enter the keywords you want to search for (this can be done with a text list)
Click Start!
See the picture below:
With the results finished, you will have a list of the results returned for every keyword. Do other tools do this job? Yes they do. Now you have another option should one of your tools no longer work.
14: Cache Extractor
Everyone likes to know when their pages were last cached in Google's database, right? All repeat after me… YEEESSSS. Ok, great! So how would you like it if you knew when all of your pages were last cached? You could export the results, save them into an Excel spreadsheet (or OpenOffice), and sort the data to see which pages on your site were having issues being cached lately.
You’ll need the addon installed obviously, so once you install it from within Scrapebox – open it up and you will see a window like so:
I loaded up a text file with a couple of websites, and this is what appears after the Cache Extractor completes its work:
With the ability to export the results as:
CSV
Excel
TXT
…. your reporting options are great!
With this data available, you are now able to focus on the parts of your website that seem to be slow in getting regularly cached.
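For illustration only, here is a sketch of the underlying idea: asking Google's legacy cache viewer for a page and reading the “as it appeared on …” date out of the banner. The endpoint and banner wording are assumptions from how the cache historically worked, and Google may no longer serve it:

```python
# Illustrative only: query the legacy Google cache viewer and scrape the
# cache date from the banner. May no longer be served by Google.
import re
import requests

def cache_date(url):
    resp = requests.get(
        "https://webcache.googleusercontent.com/search",
        params={"q": "cache:" + url},
        timeout=30,
    )
    if resp.status_code != 200:
        return None  # not cached, or the endpoint is gone
    match = re.search(r"as it appeared on ([^.]+)\.", resp.text)
    return match.group(1) if match else None

for page in ["https://example.com/", "https://example.com/about"]:
    print(page, cache_date(page))
```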
15: Fake Page Rank Checker
Considering that your outreach campaigns take a lot of time to manage, and that the value and message of your contact with random website owners has to be right on target, the last thing you need is to get burned because the PageRank of said site is a fake. Granted, I couldn't care less about PR in today's SEO market (compared with 2003); however, it is still a general rule of thumb for a website's standing in the industry. I personally use other metrics for judging a site's worth, but PR still has to be considered in the mix for completeness' sake.
Anyway, here is how you look at PageRank issues with ScrapeBox:
As with most addons in this tool, you can load up your information from the included options. I always use a text file myself, but to each their own. With your URLs loaded up, your screen will now appear like so:
All that is required now is to click “Start”. Of course the next screen will now look like:
The good news is that both Powered By Search and The Weather Network are who they say they are – isn’t this beautiful? So the next time you need to check if the Pagerank is being faked, spoofed, goofed, or what not – you can fire up bulk checking abilities through ScrapeBox.
16: Duplicate Remover
I don't think there is a single ethical person out there in our world who likes duplicate content; emphasis on honest/ethical. Have no fear, ScrapeBox is here! And here you thought that this tool was meant for spamming duplicate garbage; no-no, my white-hat friend, this tool is just the opposite of that. Want to know how? Read on!
The first plan of action is of course to sit down and think about how you would use this addon. What do you normally undertake throughout your working day, that involves removing or stripping duplicate “stuff” – so you are left with simply original materials? Ask yourself this question and think about it. Tools do absolutely nothing unless they are used properly and in the right context.
For the sake of this example, I will use two example text files. They will be:
Colors
Numbers
So let’s look at how you would introduce these files into the program. Take a peek below:
First, you will want to select the sources of files to merge. In order to make this work as it should, the tool requires that the data all be in one location to begin with. In our example case, I am using two text files.
Once you select both text files from the windows explorer window that pops up, you then want to click the source button here:
This button is the source location and is where the merged files will appear when joined into a new file. When you click the source button, you will get a window:
Asking you where you want to save the file
Asking for the name of the merged file export
In my case, I called this file “combo”. Now we select the final source location for the output of our file, AFTER we remove either duplicate URLs or duplicate domains. Take a look at the screenshot below:
When you click the button for the source location (for the final file) – it will ask you what you want to call it & where to save it.
Here’s a handy tip to note – you are not limited to just URLs, you can prune email addresses, etc… oh the possibilities!
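Here is a minimal sketch of the same merge-then-dedupe flow in Python; the file names mirror the Colors and Numbers examples from above:

```python
# Minimal sketch: merge several text files, then strip duplicate URLs, or
# collapse the list to one URL per domain.
from urllib.parse import urlparse

def merge(paths):
    lines = []
    for path in paths:
        with open(path) as fh:
            lines.extend(line.strip() for line in fh if line.strip())
    return lines

def dedupe_urls(urls):
    seen, out = set(), []
    for url in urls:
        if url not in seen:
            seen.add(url)
            out.append(url)
    return out

def dedupe_domains(urls):
    seen, out = set(), []
    for url in urls:
        domain = urlparse(url).netloc
        if domain not in seen:
            seen.add(domain)
            out.append(url)
    return out

combo = merge(["colors.txt", "numbers.txt"])  # the two example files
with open("combo-clean.txt", "w") as fh:
    fh.write("\n".join(dedupe_urls(combo)))
```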
17: Domain Name Checker
The domain name lookup feature is very useful, and it is much easier than typing one name after another into GoDaddy's URL finder (for 40 minutes), because we all know that every domain we ever think of has already been registered. Reminds me of the time I literally typed a bunch of random letters into an email registration form on GMAIL and it told me the address had already been taken…
So what is the point of this tool? Well, just like the name sounds – it allows you to search for available domains, domains that can be registered – you name it – it’s quick, it’s easy – and it works. Let’s see what the process involves:
Unlike other sections in this guide – the domain checker is INSIDE the keyword scraper and is not accessed via the addons tab. Of course there had to be one trap in all of this!
Once you are inside the keyword scraper, you will want to enter in your phrases like so:
You enter in your phrases and click “Scrape” along the bottom right side there. When the results are returned, you should get something like so:
Really simple isn’t it? Next, select the domain button and you will be presented with the following:
As you can see, it really is straightforward. Of course, exact match domains are not worth your time, nor are squatted domains, but this is a great way to check a list of ideas very quickly without spending a heap of time typing in one thought after another. Time saved is time earned, is it not?
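If you wanted to script a rough .com availability check yourself, here is a minimal sketch based on the convention that the Verisign WHOIS server answers “No match for” when a name is unregistered; treat your registrar's answer as the authoritative one:

```python
# Minimal sketch of a bulk .com availability check via raw WHOIS (port 43).
import socket

def com_available(name):
    with socket.create_connection(("whois.verisign-grs.com", 43), timeout=10) as sock:
        sock.sendall((name + "\r\n").encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return b"No match for" in reply  # heuristic: "No match" = unregistered

for idea in ["example.com", "surely-nobody-took-this-one-12345.com"]:
    print(idea, "available" if com_available(idea) else "taken")
```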
18: Meta Scraper
Are you a data junkie? How much do you love looking at page titles, descriptions, and even keywords? If this sounds like something that makes you excited, keep your pants on, because ScrapeBox can handle that as well. The way this works is very straightforward.
The first thing you do is plug in a keyword to harvest your URLs from.
After your URLs are harvested, you then hover your mouse over the “Grab” tab on the right hand side – and select the “meta info from harvested URL list”.
Your screen should now look like the following:
Not very intimidating, is it? Can you guess what the next step may be? If you guessed pressing the Start button, you would be a genius! For completeness' sake, here is what your screen should look like once things are rolling.
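As a rough sketch of what the meta grab does, here is a minimal Python version that pulls the title, description, and keywords out of a page; it is illustrative and skips edge cases like charset detection:

```python
# Minimal sketch: extract <title> plus description/keywords meta tags.
import requests
from html.parser import HTMLParser

class MetaGrabber(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title, self.meta, self._in_title = "", {}, False
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") in ("description", "keywords"):
            self.meta[a["name"]] = a.get("content", "")
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

for url in ["https://example.com/"]:  # placeholder harvested list
    grabber = MetaGrabber()
    grabber.feed(requests.get(url, timeout=30).text)
    print(url, grabber.title.strip(), grabber.meta)
```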
19: Domain Resolver
Last but certainly not least is the ability to check domains for their IP and country of origin, otherwise known as domain resolving or IP resolving. While this probably would not be used daily, it is still a handy feature to have available. The first step in this process is to fire up the proper addon by heading to the addons tab at the top of the tool, then selecting Domain Resolver. If the addon is not showing up, you need to install it from the list of available addons.
As you can see from the screenshot below, you have the ability to either load a pre-saved list of URLs or you can manually enter in domains by clicking “Add Entries”.
Once you have that finished, all you have to do is either tick the “try to resolve location” option or simply click Resolve to begin the process. When all is said and done, your results should look similar to the following:
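Here is a minimal sketch of the resolving step itself; country-of-origin lookups would additionally need a GeoIP database (e.g. MaxMind), which is out of scope here:

```python
# Minimal sketch: resolve each domain to an IP address.
import socket

for domain in ["example.com", "example.org"]:
    try:
        print(domain, "->", socket.gethostbyname(domain))
    except socket.gaierror as err:
        print(domain, "-> unresolved:", err)
```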
That is how you resolve IPs and that is how you use ScrapeBox! While there are more uses for Scrapebox, this list is a pretty good summary of all the good you can do with the tool. As with anything in life, it can be used for both good and bad. So the next time someone tells you that ScrapeBox is nothing but a black hat tool – you can refer them to this post for the win.
I’d love to hear your feedback on this. Do you see yourself using this tool for any of your daily SEO tasks or have you used it in the past for any of the techniques mentioned here? If so – what was the reason you chose to use ScrapeBox over other sets of tools?
Need more help with Scrapebox? Still not sure how to grow your business with SEO? Learn more by booking a free 25 minute marketing assessment with us.
Advanced ScrapeBox Link Building Guide – QuickSprout

Utter the word “ScrapeBox” around a group of white hat SEOs and you’ll notice a sea of icy glares pointing in your direction. At some level, this ire is understandable: most of those annoying, spammy blog comments you see in your WordPress Akismet spam folder likely stemmed from ScrapeBox. But like any tool, ScrapeBox is all about how you use it. In fact, many white hat SEO agencies consider the software one of their secret weapons. And in this chapter I’m going to teach you how to use ScrapeBox for good…not evil.
ScrapeBox 101
For those of you new to this tool, ScrapeBox essentially does two things: scrapes search engine results and posts automatic blog comments. We're going to ignore the blog commenting feature because that's a spammy tactic that doesn't work. Don't be fooled by its simplicity: ScrapeBox is very powerful. You can easily streamline dozens of monotonous white hat link building processes with this tool.
But before we get into that, let me give you a quick primer on how the tool works.
There are 4 boxes in the ScrapeBox user interface. Here’s what they do:
We’re going to ignore the bottom right corner as this is only used for automatically posting blog comments.
There’s one other important area to point out: manage lists.
This is where you can easily sort and filter the results that ScrapeBox finds for you.
How to Harvest Results
Let’s start with the “Harvester” area.
There are two important sections here:
“Footprint”
And “Keywords”
The footprint is what you include if you want to look for something that tends to appear on certain sites. For example, “Powered by WordPress” is a common footprint used to find WordPress blogs.
Let's say you wanted to find websites that have pages about nutrition. First, you'd put site:.gov in the footprint field.
And you'd include any keywords that you want to combine with the footprint. For example, if you enter the keyword “weight loss,” ScrapeBox would automatically search for: site:.gov weight loss.
You can add hundreds of keywords and ScrapeBox will automatically combine them with your footprint. When you’re ready to scrape, head down to the search engine and proxy settings.
And choose which search engines you want to use and how many results you want to find. I usually stick to Google and scrape 500-1000 results (after about 200 results, most of the results that you get are either irrelevant or from sites without much authority).
When you have a few search strings set up, click “Start Harvesting”:
You’ll now have a list of URLs in your “URL’s Harvested” area:
Checking PR
To get the most from your scraped list, you should check the PR of each page that you found. Under “Manage Lists”, choose “Check PageRank”:
Choose “Check URL PageRank”.
Now you can easily sort the pages by PR.
And delete pages that fall below a threshold. Let's say you don't want pages with a PR below 3. Scroll until you see the PR 2 pages.
Click and scroll to highlight those results (you can also hold shift and use the direction buttons on your keyboard to select):
Right click and choose “Remove selected URL from list”
Filtering Your Results
If you scrape from multiple search engines you’ll probably get a few duplicate results in your list. You can easily remove them from your list by clicking the “Remove/Filter” button.
And choose “Remove Duplicate URLs.”
If you don’t want the same domain showing up in your results, you can delete duplicate domains by choosing “Remove Duplicate Domains” from the Remove/Filter options:
Now you have a clean list sorted by PR. You can export that information to Excel or a text file by clicking the “Export URL List” button. And choosing the export option that works best for you (I personally like Excel).
Using Proxies
If you’re going to be using ScrapeBox on a regular basis, proxies are a must. If you scrape from your personal IP regularly, Google will likely ban it. Meaning no more Google searches. Fortunately, you can find free, working public proxies fairly easily.
And you don’t need any technical skills to set them up.
Using ScrapeBox’s Built In Service
ScrapeBox has a cool feature that actually finds and adds free proxies for you.
Head over to the “Select Engines and Proxies” box. Hit “Manage”:
In the next window, choose “Harvest Proxies.” Choose all of the supported sources. Click “Start.”
Hit “Apply”.
It’s important to test the proxies before using them. If you use non-functional proxies, ScrapeBox simply won’t work. Hit the “Test Proxies” button.
Choose “Test all proxies.”
Wait for ScrapeBox to test the proxies (it can take a while). When it’s finished you’ll see something like this:
Hit the filter button and choose “Keep Google Proxies.”
Hit the save button and choose “Save selected proxies to ScrapeBox.”
This will save the working, Google-approved proxies.
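The proxy test boils down to one question: can this proxy fetch a Google page within a timeout? Here is a minimal sketch of that check; the proxy addresses are placeholders:

```python
# Minimal sketch: keep only proxies that can actually fetch a Google page.
import requests

candidates = ["1.2.3.4:8080", "5.6.7.8:3128"]  # hypothetical proxy list

def works(proxy):
    try:
        resp = requests.get(
            "https://www.google.com/",
            proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
            timeout=10,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False  # timed out, refused, or otherwise broken

good = [p for p in candidates if works(p)]
print(f"{len(good)} working proxies:", good)
```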
Now that you have a handle on how it works it’s time to use ScrapeBox to help you build incredible backlinks.
Resource Page Link Building
Resource page link building is one of the most under-utilized white hat link building strategies on the planet. Where else can you find pages that exist for the sole purpose of linking out to other sites? However, most people shy away from this strategy because it’s extremely time consuming to find resource pages, hunt for contact information and reach out to webmasters. Fortunately, you can dramatically streamline the resource page link building process with ScrapeBox.
First, enter one of these tested footprints into ScrapeBox:
intitle:resources
inurl:resources
inurl:links
In conjunction with niche-related keywords.
And hit “Start Harvesting.”
Sort your pages by PR to focus on the highest-value targets.
Now export your list, check for broken links, or just email site owners and beg for a link!
Competitor Backlink Analysis
There’s nothing better than reverse engineering your competition. It’s one of the only ways to quickly find an incredible list of high-value, niche related sites to get links from. While OSE, Majestic and Ahrefs are fantastic tools, they’re hard to use for sites with thousands of links. Enter ScrapeBox.
Open ScrapeBox and click Addons → Show available addons.
Choose ScrapeBox Backlink Checker 2:
And click “Install Addon.”
For the addon to work, you need to have your competitor’s homepage in the harvester results area. To do this, just enter the site’s name:
Set the results to 10.
And scrape the results.
Delete any pages that you’re not interested in grabbing backlink information from.
Go back to the Addon menu and select the Backlink checker addon.
Click “Load URL List.” Choose “Load from ScrapeBox Harvester.”
Hit “Start.”
When it's done, choose “Download Backlinks.”
And save the file as a text file.
Close the Backlink checker and head back to the ScrapeBox main menu. Under “Manage Lists” choose “Import URL List.”
And upload the text file you saved.
Check the PR of the links in your list.
Now you can sort by PR so that you spend your time on backlink targets that meet your page PR or homepage PR threshold:
Find Guest Post Opportunities
Searching for relevant, authoritative sites to guest post on is one of the most monotonous link building tasks on the planet. Armed with ScrapeBox you can find thousands of potential guest post targets (and weed out low PR sites) in a matter of minutes.
Start off with a few footprints that sites which accept guest posts tend to have, such as:
allintitle:guest post guidelines
intitle:write for us
“guest blogger”
And combine them with your target keywords.
Harvest your results. But this time, you want to delete duplicate domains. After all, you only need to see one published guest post or list of guest blogger guidelines to know that they accept guest posts.
Click “Remove/Filter” and choose “Remove Duplicate Domains.”
Check the PR. Because the PR of the guest post guidelines page doesn’t matter, choose the “Get Domain PageRank” option. This will show you the site’s homepage PR.
Now sort by PR and get crackin’!
Outbound Link Checker
You already know that PageRank is finite. And it’s a waste to work your tail off to land a backlink on a high PR page if it’s going to be surrounded by hundreds of others. Fortunately, using ScrapeBox, you can instantly find the number of outbound links of any page (or pages).
Click Addons → Show available addons. Choose the ScrapeBox Outbound Link Checker.
Click “Install Addon”
If you have a list of domains loaded into ScrapeBox from a harvest you can use those. Open the program from the addon menu and click “Load List.” Click “Load from ScrapeBox.”
If you'd prefer, you can upload the list of URLs from a text file. Copy and paste your target pages into a text file. Then click “Load List” from the addon and “Load from File.”
When the URLs display in the addon, click “Start.”
And the addon will display the number of internal and external links.
If you want to maximize the link juice you get from each link you may want to limit your targets to pages with 50-100 or fewer external links. To do that, click the “Filter” button.
And choose your threshold:
And the addon will automatically delete any URLs with 100 or more external links.
Find and Help Malware Infected Sites
A labor-intensive, but effective, white hat link building strategy is to help webmasters with infected sites. Some site owners neglect their sites for months at a time — leaving them ripe for hackers. If you can swoop in and save the day, they’ll often be more than happy to reward you with a link. You can find dozens of niche-relevant infected sites using ScrapeBox.
There's no footprint to use for malware infected sites. However, the CMS Pligg tends to have an unusual number of infections. You can find Pligg sites using a footprint like:
“Five character minimum”
Once the URLs are loaded up, install the Malware and Phishing Filter addon.
Start the addon, and choose “Load URLs from Harvester.”
Click “Start.”
The tool will show you if your list has any infected sites.
If you find any, do not visit the sites! They can (and will) infect your PC with malware.
Instead, choose “Save Bad URL’s to File”.
And save the list.
We're going to use another ScrapeBox addon to get the contact information of the infected site owners: the ScrapeBox Whois scraper. This tool allows you to find the Whois information for the infected sites without having to actually visit them.
Once installed, open the addon. Load your file of infected sites.
Once finished you’ll see a list of names, emails, etc.
Save the file. Now put on your cape, reach out to the infected site owners and go save the day!
Local SEO Citation Reverse Engineering
If you do local SEO, you already know that citations are the lifeblood of your campaign. However, reverse engineering your competition's local citations using OSE or other tools doesn't always work. Why? Because NAP (Name, Address, and Phone Number) citations aren't always backlinks and therefore don't show up in link analysis tools. And without being able to reverse engineer them, local community pages and directories are almost impossible to find. But not with ScrapeBox.
For this example, let’s assume you’re trying to rank a dentist in Pawtucket, Rhode Island. First, conduct a local search in Google:
And visit the site of one of the top results.
Look for their address on the sidebar or contact us page.
And copy that address into the keyword area of ScrapeBox.
Important: Make sure the address is on a single line.
And put the street address in quotes (if you don’t, the search engines will sometimes return results that don’t have that exact street address on the page).
And add a few variations of the street name. This way, if the citation is listed as “ave.” instead of “avenue” or “rd.” instead of “road,” you'll still find it.
Finally, you don’t want to see pages of the business you’re reverse engineering in the results. And if the site has their address listed in the sidebar or footer (as many local businesses do), you’ll find that your results are littered with hundreds of pages from that domain. You can avoid this by adding the -site: operator to your keyword search. This operator prevents any results from that site from showing up in your search.
Add this to the end of the keywords that you already entered into ScrapeBox.
Hit “Start harvesting.” And you should find tons of otherwise impossible-to-find citation targets:
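To make the query construction concrete, here is a minimal sketch that builds the quoted-address variations with the -site: operator appended; the business site and address are hypothetical stand-ins:

```python
# Minimal sketch: quoted street-address variants plus -site: to suppress
# the business's own pages from the results.
business_site = "example-dentist.com"  # hypothetical competitor
address_variants = [
    '"123 Main Street" Pawtucket',
    '"123 Main St" Pawtucket',
    '"123 Main St." Pawtucket',
]

for variant in address_variants:
    print(f"{variant} -site:{business_site}")  # one citation-hunting query
```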
White Hat Blog Commenting
While ScrapeBox is infamous for its automatic blog commenting feature, it is surprisingly useful for white hat blog commenting. You can use ScrapeBox to quickly find tons of authoritative, niche-targeted pages to drop manual blog comments on.
First, enter one of these example footprints (there are hundreds) for finding pages that allow blog comments into the ScrapeBox harvester:
“you must be logged in to comment” (This is a good one because sites that require login usually don't get spammed to death.)
“post a comment”
“post new comment”
Then, enter a few niche keywords into the keyword field.
Click “Start Harvesting.” When you get your results, you should sort them by PR and delete any that fall below a certain threshold (for blog commenting, it's best to sort by PAGE PR, not homepage PR).
Sort by PR:
And delete any that don’t seem worthwhile.
If you have a large list and want to choose your targets carefully, you might also want to check the number of outbound links.
This time, load the list from ScrapeBox:
And filter out any that seem to have too many external links.
Now you can save the results and use that as your working list.
