ScrapeBox SEO Software


Fast Multi-Threaded Operation
Exceptionally fast operation with multiple concurrent connections.
Highly Customizable
Numerous options for expansion and customization to suit your needs.
Outstanding Value
Hundreds of features to complement your SEO at an affordable price.
Numerous Addons
Over 30 free addons, to expand ScrapeBox with numerous new features.
Great Support
Numerous support videos, guides and 24/7 tech support staff available.
Tried And Tested
Originally released in 2009 and still going strong in 2017 with frequent updates.
Described by many users as the Swiss Army Knife of SEO! Years later, people are still finding innovative new uses for ScrapeBox to help with their day to day SEO and Internet Marketing needs.
Search Engine Harvester
Harvest thousands of URLs from over 30 search engines such as Google, Yahoo and Bing in seconds with the powerful and trainable URL harvester.
Keyword Harvester
Extensive keyword harvester, to produce thousands of long-tail keywords from a single base keyword.
Proxy harvester
Powerful proxy harvester and tester, to ensure you can keep your work private through the use of thousands of free proxies.
Comment Poster
Use the fast, and trainable multi-threaded poster to leave comments on dozens of platforms with your backlink and desired anchor text.
Link Checker
Quickly scan thousands of pages to verify your backlinks exist, and the anchor text with the fast multi-threaded backlink checker.
Numerous Tools
Download Videos, Create RSS Feeds or Sitemaps, Find Unregistered Domains, Extract Emails, Check Indexed Pages and dozens more time saving features.
Harvest thousands of URLs from Google, Yahoo, Bing and 30 other search engines in seconds! With inbuilt footprints for finding numerous platforms like WordPress, forums, guestbooks etc.
You can gather lists of links that are highly relevant to your keywords and niche. Great for researching competitors, finding new blogs to comment on, doing product research or even gathering facts and info for your next blog post or article.
You also have the ability to easily add your own search engines to harvest from virtually any site. You can add specific country based search engines, or even create a custom engine for a WordPress site with a search box to harvest all the post URLs from the website. If the site has a search box, chances are ScrapeBox can work with it!
See More
Mass Link Builder
The trainable poster means blog commenting has never been easier – you can make thousands of blog comments in minutes. Have you ever wanted to populate your blog with comments so it appears more popular?
Are you in desperate need of backlinks? The ScrapeBox blog commenter doesn’t just post on your own blogs, you can post comments on dozens of different blog platforms, guestbooks, image platforms, trackbacks and even contact forms.
This will help boost your exposure in all the search engines, earn a higher PageRank and send a flood of traffic to your sites from readers of the thousands of blogs clicking your link in the comments.
Proxy Harvester
ScrapeBox can harvest proxies from various websites and forums which publish proxy lists, so you can add the URLs of your favorite proxy websites.
ScrapeBox will visit these, fetch the published proxies, test them and save the working ones.
You can even add a custom proxy test, so you can check if proxies work for Facebook, Twitter or any other site you choose, besides just being anonymous.
There are also country filters, port filters and speed filters to help you get the exact proxies you need. Some tools charge more than ScrapeBox just for this one feature!
Keyword Scraper
Keywords: it's hard to do keyword research without them, right? Don't worry, ScrapeBox has you covered with its lightning-fast keyword scraper.
There’s a gold mine of keywords out there amongst various “suggest” services like Google Suggest.
When you type in to search boxes of various services, many pop down suggestions for related and long tail searches.
These are highly valuable because they are based on what other people are typing and looking for. ScrapeBox can harvest these suggestions from many popular services making it possible to gather tens of thousands of keywords from a single base keyword.
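As a rough illustration of how suggest harvesting works, here is a minimal Python sketch. It assumes the public `suggestqueries.google.com` endpoint (not guaranteed to be stable, and not ScrapeBox's own implementation); the "level" loop mimics feeding each round of suggestions back in as new seeds:

```python
import json
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

def parse_suggestions(payload):
    # The endpoint returns JSON shaped like ["query", ["suggestion1", ...]]
    data = json.loads(payload)
    return list(data[1])

def fetch_suggestions(keyword):
    url = SUGGEST_URL + urllib.parse.quote(keyword)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_suggestions(resp.read().decode("utf-8"))

def expand(seed, level=2):
    # Breadth-first expansion: each round feeds the previous round's
    # suggestions back into the suggest endpoint, like ScrapeBox's "level".
    found, frontier = set(), {seed}
    for _ in range(level):
        next_frontier = set()
        for kw in frontier:
            for suggestion in fetch_suggestions(kw):
                if suggestion not in found:
                    found.add(suggestion)
                    next_frontier.add(suggestion)
        frontier = next_frontier
    return found
```

Even at level 2, a single seed keyword can fan out into hundreds of long-tail variations.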
ScrapeBox is a one-time purchase; it is NOT a monthly or yearly subscription. The purchase price is a single PC license and entitles you to run one copy of the software with one free license transfer per month; any bug fixes and minor upgrades are completely free for owners of ScrapeBox. Do we update it? You better believe it, ScrapeBox has had an amazing 500 new versions since it was originally released in 2009 through to ScrapeBox v2.0 in 2021, yes that's 12 years! We are constantly adding new features, listening to customer feedback and enhancing ScrapeBox.
ScrapeBox is Windows and Apple Mac compatible software. It works on Windows XP, Vista, Windows 7, Windows 8 and Windows 10, and on Apple Mac up to Big Sur. It also works on Windows Server 2003, 2008, 2012, 2016 and 2019, on both 32 and 64 bit machines. Due to the large amount of functionality, a screen resolution larger than 1024x768px is advisable for optimal interface display, and a working internet connection is required.
Don’t be fooled by its simplicity: ScrapeBox is very powerful. You can easily streamline dozens of monotonous white hat link building processes with this tool.
In fact, many white hat SEO agencies consider the software one of their secret weapons.
An overwhelming amount of power that can help speed up daily tasks and production for even the purest of the pure, hardcore white-hat junkies.
Adrian Barrett, SEO,
Though SB offers a host of other unholy functions (both gray and black hat), the keyword scraper module has a white hat soul. ScrapeBox’s keyword scraper is a multichannel suggest-box mining tool that easily transfers KW lists between the engines. The UI is brilliant in its simplicity. Other suggestion box tools exist but none are more profoundly useful.
Marty Weintraub, Founder,
The Ultimate Guide to White Hat SEO using Scrapebox - Onely


More than a year ago, on my G+ profile, I posted about something that I found funny: using Scrapebox for white hat. A lot has changed during this year, so now we know we need to focus more and more on the quality of backlinks instead of quantity. This means that we have to rethink which tools we should use and how they can help us maximize our SEO.
Personally, like Bartosz mentioned in his blog post on LRT, I find Scrapebox very useful for every single SEO task I do connected with link analysis or link building.
Scrapebox – a forbidden word in SEO
I bet everybody knows Scrapebox, more or less. In short – it's a tool used for mass scraping, harvesting, pinging and posting tasks in order to maximize the amount of links you can gain for your website to help it rank better in Google. A lot of webmasters and blog owners treat Scrapebox like a spam machine, but in fact it is only a tool, and what it's actually used for depends on the "driver".
Now, due to all the Penguin updates, a lot of SEO agencies have changed their minds about linkbuilding and have started to use Scrapebox as support for their link audits or outreach.
Scrapebox – general overview
You can skip this section if you know Scrapebox already. If not – here is some basic information about the most important functions you can use.
Scrapebox is cheap. Even without the discount code, it costs $97. You can order ScrapeBox here.
In this field, you can put the footprint you want to use for harvesting blogs/domains/other resources. You can choose from the Custom option and predefined platforms. Personally, I love to use the "Custom footprint" option because it allows you to get more out of each harvest task.
Here, you can post keywords related to your harvest. For example, if you want to get WordPress blogs about flowers and gardening, you can post “flowers” and “gardening” along with the custom footprint “Powered by WordPress”. It will give you a list of blogs containing these keywords and this footprint.
The URL’s Harvested box shows the total amount of websites harvested. Using the option number 6, you can get even more from each results list.
Select Engines & Proxies allows you to choose which search engines you want to get results from, and how many results to harvest. For link detox needs or competition analysis, I recommend making use of Bing and Yahoo as well (different search engines give different results, which means more information harvested). Also, you can post the list of proxies you want to use and manage them by checking if they are alive, not blocked by Google and so on. After that, you can filter your results and download them as a file for further usage.
Comment Poster allows you to post comments to a blog list you have harvested, but in our White Hat tasks – we do not use it. Instead of that, we can use it to ping our links to get them indexed faster.
Scrapebox – Addons
By default, Scrapebox allows you to use a lot of different addons to get more and more from your links. You can find them by clicking “Addons” in the top menu in the main interface. Here is our list of addons:
To get more addons, you can click on "Show available addons". Also, remember about premium plugins, which can boost your SEO a lot.
Keyword Scraper – the very beginning on your link building
One of the most powerful things in Scrapebox that I use all the time is the integrated Google suggested keywords scraper. It works very simply and allows you to very quickly get a list of keywords you should definitely use while optimizing your website content or preparing a new blog post. To do this, just click on the "Scrape" button in the "Harvester" box and select "Keyword Scraper". You will see a Keyword Scraper window like this one:
The fun starts right now. On the left side, simply put a list of keywords related to your business or blog and select Keyword Scraper Sources. Later, select the search engine you want to have research done on and hit the “Scrape” button.
As you can see on the screenshot above, you can also select the total "level" for the keyword scraper. For most keyword research tasks it's okay to leave it at 2, but you can adjust it up to 4 for niches that need deeper digging (for example, for cooking blogs, level 4 surfaces more keywords related to specific recipes or kitchen tips and tricks). Remember that the higher the level you choose, the longer it will take to see results.
After that, do a quick overview of the results you’ve got – if you see some superfluous keywords you don’t want to have in your keywords list, use “Remove” from the drop down list to remove keywords containing/not containing specified string or entries from a specified source.
If the list is ready – you can send it to ScrapeBox for further usage or just copy and save to your notepad for later.
Now: let’s start our Outreach – scrape URLs with Scrapebox
So: we have our keyword research done (after checking the total amount of traffic that keywords can bring to your domain) – now let’s see if we can get some interesting links from specified niche websites.
After sending our URL list to ScrapeBox we can now start searching for specified domains we would like to get links from.
Footprints – what they are and how to build them
Footprints are (in a nutshell) pieces of code or sentences that appear in a website's code or text. For example, when somebody creates a WordPress blog, they have "Powered by WordPress" in their footer by default. Each CMS can have its very own footprints connected with both the content and the URL structure. To learn more about footprints, you should examine top Content Management Systems or forum boards and check which repeatable pieces of code get indexed.
How to build footprints for ScrapeBox
Firstly, learn more about Google Search Operators. For your basic link building tasks you should know and understand these three search operators:
inurl: – shows URLs containing a specified string in their address
intitle: – shows pages whose title contains a specified text string
site: – lists domains/URLs/links from a specified domain, ccTLD etc.
So if you already know this, do a test search answering questions related to your business right now:
Do I need do follow links from blogs and bloggers related to my niche?
Do I need backlinks from link directories to boost my SEO for one specified money keyword?
Should these links be do follow only?
On which platforms can I easily share my product/services, and why?
Got it? Nice! Now let’s move to the next step – creating our footprint:
So let’s say that you are the owner of a marketing blog related to CPC campaigns and conversion rate optimization. The best idea to get new customers for your services is:
Manual commenting on specified blogs
Creating and posting guest posts on other marketing blogs related to your business
Being in top business link directories which allow you to post a lot of information about your business
Let's say that we need the top 100 links where we can post a comment or get in touch with bloggers and contact them about guest posting.
From our experience, and after we did keyword research with Keyword Scraper in ScrapeBox, we've noticed that the top platform for blogging about marketing is WordPress – both on our own domain and on free platforms.
To get the top 100 blogs related to our needs you can simply use:
“Powered by WordPress” + AdWords AND
This means that we want to search for WordPress blogs on Polish TLD domains with "AdWords" somewhere on the page. However, the results may not be well targeted unless you use advanced operators to specify where the string should be found.
Use footprints in ScrapeBox
Now, after you’ve learned the basics of footprints, you can use them to get specific platforms which will allow you to post a link to your website (or find new customers if you would like to guest blog sometimes).
To do that, simply put them here:
You can combine footprints with advanced search engine commands like site:, inurl: or intitle: to get only these URLs.
Advanced search operators and footprints have to be connected with the keywords we want to target so as to find more, better pages to link from.
For example, you can search only for pages containing a specified keyword in the URL (inurl:) and title (intitle:). Now the URL list will be shorter, but it will contain only related results matching our needs.
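A query like this can be assembled programmatically. Here is a minimal sketch with a hypothetical `build_query` helper (not part of ScrapeBox – the operator combinations mirror the ones described above):

```python
def build_query(footprint, keyword, site=None, inurl=None, intitle=None):
    """Assemble a search query from a footprint, a keyword and
    optional advanced operators (hypothetical helper)."""
    parts = [f'"{footprint}"', keyword]
    if site:
        parts.append(f"site:{site}")
    if inurl:
        parts.append(f"inurl:{inurl}")
    if intitle:
        parts.append(f"intitle:{intitle}")
    return " ".join(parts)
```

For instance, `build_query("Powered by WordPress", "adwords", inurl="blog")` produces `'"Powered by WordPress" adwords inurl:blog'`.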
Expert’s Tip:
For your product or service outreach, you can harvest a lot of interesting blogs hosted on free blog network sites, or your language-related sites. Links from these pages will have different IP addresses, so they can be really valuable for your rankings.
Find Guest Blogging opportunities using ScrapeBox
By using simple footprints like:
"guest blogger" or "guest post" (to search only for links where somebody has already posted a guest post – you can also use the allinurl: search operator, because a lot of blogs have a "guest posts" category which can be found in their URL structure)
Later, combine it with your target keywords and get ready to mail and post fresh guest posts to share your knowledge and services with others!
Check the value of the harvested links using ScrapeBox
Now, when your keyword research is done and you have harvested your very first links list, you can start checking some basic information about the links. Aside from ScrapeBox, you will also need the Moz API.
Start with trimming to domain
In general, our outreach is supposed to help us build relationships and find customers. This means that you shouldn’t be only looking at a specific article, but rather the whole domain in general. To do that, select the “Trim to root” option from the Manage Lists box:
Later, remove duplicates by clicking the Remove/Filter button and select “Remove duplicate URLs”.
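These two steps can be sketched in a few lines of Python (a rough equivalent, not ScrapeBox's own code):

```python
from urllib.parse import urlparse

def trim_to_root(url):
    # Equivalent of ScrapeBox's "Trim to root": keep scheme + host only.
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}/"

def dedupe(urls):
    # Remove duplicates while preserving the original order,
    # like "Remove duplicate URLs".
    seen, out = set(), []
    for url in urls:
        if url not in seen:
            seen.add(url)
            out.append(url)
    return out
```

Running `dedupe([trim_to_root(u) for u in urls])` over a harvested list leaves one entry per domain, ready for outreach.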
Check Page Rank in ScrapeBox
Start with checking PageRank – even if it's not the top ranking factor right now, it still provides basic information about the domain. If the domain has a PageRank higher than 1 or 2, this means that it's trusted and has links from other related/high PR sources.
To check Page Rank in ScrapeBox, simply click on “Check Page Rank” button and select “Get domain Page Rank”:
To be 100% sure that each domain has legit PR – use the "ScrapeBox Fake Page Rank Checker". You can find it in the Addons section in your ScrapeBox main window.
I tend to say that it's not a good idea to trust any 3rd party tool's results about link trust (because it's hard to measure whether a link is trusted or not), although it's another good sign if every single result for a link is "green".
To check Domain Authority in ScrapeBox you can use the Page Authority addon. You can find it in your Addons list in ScrapeBox. To get it to work you will have to get your very own Moz API information (the window will appear after you select the addon).
This provides a quick overview of your links list. You can get information about the Page/Domain Authority, MozRank and the amount of external links pointing to the domain/page. With that, you can see if a URL is worthy of your link building tactics and all the work you plan to put in or not.
Remember: Do not rely on MozRank or Page/Domain authority only.
To get top links, try to look for average ones – a lot of backlinks with medium MozRank/Page/Domain authority.
Email scraping from a URL list using ScrapeBox
After you’ve harvested your first link list, you will probably want to get in touch with bloggers to start your outreach campaign. To do this effectively, use the Scrapebox Email Scraper feature. Simply click on the Grab/Check button and select to grab emails from harvested URLs or from a local list:
The results may not be perfect, but they can really give you a lot of useful information. You can export data to a text file and sort them by email addresses to find connections between domains.
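For a sense of what an email grabber does under the hood, here is a minimal sketch (a simple regex approach; ScrapeBox's actual matching rules may differ):

```python
import re

# Deliberately simple pattern; real-world address validation is messier,
# but this is roughly the level of matching an email grabber needs.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def grab_emails(html):
    # Return unique addresses, lowercased, in order of first appearance.
    seen, out = set(), []
    for match in EMAIL_RE.findall(html):
        addr = match.lower()
        if addr not in seen:
            seen.add(addr)
            out.append(addr)
    return out
```

Feeding each harvested page's HTML through `grab_emails` and writing the results to one file per domain gives you the same kind of export described above.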
Merge and remove duplicates using ScrapeBox
If you are running a link detox campaign, it's strongly recommended to use more than one backlink source to get all of the data needed, for example to lift a penalty. If you have more than 40 thousand links in each file, you will probably want to merge them into one file and dig into it later.
To do this quickly, install the DupeRemove addon from the available addon list. After running it, this window will pop up:
Now simply choose “Select source files to merge” and go directly to the folder with the different text files with URL addresses. Later press “Merge files” to have them all in one text file.
To remove duplicate URLs or domains, "Select source file" and choose where to export the non-duplicated URLs/domains. Voila! You have one file containing every single backlink you need to analyze.
For those who like to do things in smaller parts – you have the option of splitting a large file into smaller ones. Select your text file with backlinks and choose how many lines per file it should contain. From my point of view, it’s very effective to split your link file into groups of 1000 links per file. It’s very comfortable and gives you the chance to manage your link analysis tasks.
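The merge and split steps can be approximated in Python like this (a sketch, not the DupeRemove addon's actual logic):

```python
def merge_files(paths):
    # Read every non-empty line from every file,
    # like the "Select source files to merge" step.
    lines = []
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            lines.extend(line.strip() for line in fh if line.strip())
    return lines

def split_into_chunks(urls, per_file=1000):
    # Split a large list into fixed-size groups, e.g. 1000 links per file.
    return [urls[i:i + per_file] for i in range(0, len(urls), per_file)]
```

Merging, deduplicating (see the earlier trim/dedupe sketch) and then splitting into 1000-line chunks reproduces the whole workflow.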
ScrapeBox Meta Scraper
ScrapeBox allows you to scrape titles and descriptions from your harvested list. To do that, choose the Grab/Check option then, from the drop down menu, “Grab meta info from harvested URLs”:
Here, you can take a look at some example results:
You can export this data to a CSV file and use it to check how many pages use an exact match keyword in the title or optimize it some other way (i.e., do the keywords look natural to Google and not made for SEO?).
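Extracting the same meta information yourself is straightforward with Python's standard library (a minimal sketch, not ScrapeBox's implementation):

```python
from html.parser import HTMLParser

class MetaGrabber(HTMLParser):
    """Pull the <title> and meta description out of a page's HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            attr = dict(attrs)
            if attr.get("name", "").lower() == "description":
                self.description = attr.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def grab_meta(html):
    parser = MetaGrabber()
    parser.feed(html)
    return parser.title.strip(), parser.description.strip()
```

Running `grab_meta` over each harvested page and writing the tuples out gives you the same CSV-ready title/description data.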
Check if links are dead or alive with ScrapeBox
If you want to be pretty sure that every single internal/external link is alive, you can use the "ScrapeBox Alive Checker" addon. First – if you haven't done this yet – install the Alive Checker addon.
Later, to use it, head to the Addons list and select ScrapeBox Alive Check.
If you were previously harvesting URLs – simply load them from the Harvester. If not, you can load them from a text file.
Now, let’s begin with Options:
Also, remember to have the checkbox for “Follow relocation” checked.
The results can be seen here:
If a link returns an HTTP status code other than 200 or 301, ScrapeBox marks it as "Dead".
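That classification rule is easy to express in code (a sketch based on the 200/301 rule above, not the addon itself):

```python
ALIVE_CODES = {200, 301}  # anything else counts as dead

def is_alive(status_code):
    return status_code in ALIVE_CODES

def partition(results):
    # Split a {url: status_code} map into alive and dead lists.
    alive = [url for url, code in results.items() if is_alive(code)]
    dead = [url for url, code in results.items() if not is_alive(code)]
    return alive, dead
```

After fetching each URL and recording its status code, `partition` gives you the same alive/dead split the addon reports.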
Check which internal links are not indexed yet
So if you are working on some big onsite changes connected with the total amount of internal pages, you will probably want to be sure that Google re-indexes everything. To make sure everything is as it should be, you can use the Screaming Frog SEO Spider and ScrapeBox.
So start crawling your page in Screaming Frog, using the very basic setup in the crawler setting menu:
If you are crawling a huge domain – you can use a deep crawl tool instead of the Screaming Frog SEO Spider.
Later, when your crawl is done, save the results to a file, then open it and copy the URLs to the clipboard, or import the file into ScrapeBox with one click:
When your import is done, simply hit the Check Indexed button and select the Google Indexed option.
Remember to set up the Random Delay option for index checking and the total amount of connections based on your internet connection. Mostly, I use 25 connections and a Random Delay between each query sent by ScrapeBox, to be sure that my IP/proxy addresses won't be blocked by Google.
After that, you will get a pop up with information about how many links are indexed or not, and there will be an extra column added to your URLs harvested box with information about whether they are Indexed or not:
You can export unindexed URLs for further investigation.
Get more backlinks straight from Google using ScrapeBox
“Some people create free templates for WordPress and share them with others to both help people have nicely designed blogs and obtain free dofollow links from a lot of different TLDs.”
Sometimes it's not enough to download backlink data from Google Webmaster Tools or some other software made for that (although Bartosz found a really nice "glitch" in Webmaster Tools to get more links).
In this case – especially when you are fighting a manual penalty for your site and Google has refused to lift it – go deep into these links and find a pattern that is the same for every single one.
For example – if you are using automatic link building services with spun content, sometimes you can find a sentence or string that is not spun. You can use it as a footprint, harvest results from Google, and check if your previous disavow file contained those links or not.
And another example – some people create free templates for WordPress and share them with others to both help people have nicely designed blogs and obtain free dofollow links from a lot of different TLDs. Here is an example:
“Responsive Theme powered by WordPress”
This returns every single domain using this theme from Cyberchimps. If you combine it with the keywords you were linking to your site, you will probably get a very big, nice, 100% relevant WordPress blog list.
Check external links on your link lists
After you have done your first scrape for a custom-made footprint, it's good to know the quality of the links you have found. And once again – ScrapeBox and its amazing list of addons will help you!
"Outbound Link Checker" is an addon which will check links line by line and list both internal and external links. Because the addon supports multithreading, you can check thousands of links at the same time.
To use "Outbound Link Checker", go to your Addons list and select Outbound Link Checker:
Next, choose to load a URL list from ScrapeBox or from an external file.
After that, you will see something like this:
The magic starts now – simply press the “Start” button.
Now you can filter the results if they contain more than X outgoing links. Later, you can also check the authority of those links and how valuable they are.
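Counting and filtering outbound links can be sketched like this (a rough equivalent of the addon's behavior, with a hypothetical threshold helper):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

def count_external(html, own_host):
    # Count links pointing to a different host than the page's own;
    # relative links (empty netloc) count as internal.
    collector = LinkCollector()
    collector.feed(html)
    external = 0
    for href in collector.hrefs:
        host = urlparse(href).netloc
        if host and host != own_host:
            external += 1
    return external

def keep_under_threshold(pages, limit=100):
    # Drop pages whose external link count is at or above the limit.
    return [url for url, count in pages.items() if count < limit]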
Short Summary
As you can see – ScrapeBox in the Penguin era is still a powerful tool which will speed up your daily SEO tasks if used properly. Even if you do not want to post comments or links manually, it can still help you find links where you can get both traffic and customers.
Working across the technical spectrum of SEO, Onely provides strong commercial value to clients through cutting-edge solutions.
Advanced ScrapeBox Link Building Guide - QuickSprout


Utter the word “ScrapeBox” around a group of white hat SEOs and you’ll notice a sea of icy glares pointing in your direction. At some level, this ire is understandable: most of those annoying, spammy blog comments you see in your WordPress Akismet spam folder likely stemmed from ScrapeBox. But like any tool, ScrapeBox is all about how you use it. In fact, many white hat SEO agencies consider the software one of their secret weapons. And in this chapter I’m going to teach you how to use ScrapeBox for good…not evil.
ScrapeBox 101
For those of you new to this tool, ScrapeBox essentially does two things: scrapes search engine results and posts automatic blog comments. We're going to ignore the blog commenting feature because that's a spammy tactic that doesn't work. Don't be fooled by its simplicity: ScrapeBox is very powerful. You can easily streamline dozens of monotonous white hat link building processes with this tool.
But before we get into that, let me give you a quick primer on how the tool works.
There are 4 boxes in the ScrapeBox user interface. Here’s what they do:
We’re going to ignore the bottom right corner as this is only used for automatically posting blog comments.
There’s one other important area to point out: manage lists.
This is where you can easily sort and filter the results that ScrapeBox finds for you.
How to Harvest Results
Let’s start with the “Harvester” area.
There are two important sections here:
And “Keywords”
The footprint is what you include if you want to look for something that tends to appear on certain sites. For example, “Powered by WordPress” is a common footprint used to find WordPress blogs.
Let’s say you wanted to find websites that have pages about nutrition. First, you’d put in the footprint field.
And you'd include any keywords that you want to combine with the footprint. For example, if you enter the keyword "weight loss," ScrapeBox would automatically search for: weight loss.
You can add hundreds of keywords and ScrapeBox will automatically combine them with your footprint. When you’re ready to scrape, head down to the search engine and proxy settings.
And choose which search engines you want to use and how many results you want to find. I usually stick to Google and scrape 500-1000 results (after about 200 results, most of the results that you get are either irrelevant or from sites without much authority).
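The footprint-and-keyword expansion described above amounts to a Cartesian product; a minimal sketch:

```python
from itertools import product

def combine(footprints, keywords):
    # Every footprint paired with every keyword, just as ScrapeBox
    # expands its Footprint and Keywords boxes into search strings.
    return [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]
```

With one footprint and hundreds of keywords, this yields hundreds of distinct search strings to harvest.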
When you have a few search strings set up, click “Start Harvesting”:
You’ll now have a list of URLs in your “URL’s Harvested” area:
Checking PR
To get the most from your scraped list, you should check the PR of each page that you found. Under “Manage Lists”, choose “Check PageRank”:
Choose “Check URL PageRank”.
Now you can easily sort the pages by PR.
And delete pages that fall below a threshold. Let's say you don't want pages with a PR below 3. Scroll until you see pages with a PR of 2.
Click and scroll to highlight those results (you can also hold shift and use the direction buttons on your keyboard to select):
Right click and choose “Remove selected URL from list”
Filtering Your Results
If you scrape from multiple search engines you’ll probably get a few duplicate results in your list. You can easily remove them from your list by clicking the “Remove/Filter” button.
And choose "Remove Duplicate URLs."
If you don’t want the same domain showing up in your results, you can delete duplicate domains by choosing “Remove Duplicate Domains” from the Remove/Filter options:
Now you have a clean list sorted by PR. You can export that information to Excel or a text file by clicking the “Export URL List” button. And choosing the export option that works best for you (I personally like Excel).
Using Proxies
If you’re going to be using ScrapeBox on a regular basis, proxies are a must. If you scrape from your personal IP regularly, Google will likely ban it. Meaning no more Google searches. Fortunately, you can find free, working public proxies fairly easily.
And you don’t need any technical skills to set them up.
Using ScrapeBox’s Built In Service
ScrapeBox has a cool feature that actually finds and adds free proxies for you.
Head over to the “Select Engines and Proxies” box. Hit “Manage”:
In the next window, choose "Harvest Proxies." Choose all of the supported sources. Click "Start."
Hit "Apply".
It's important to test the proxies before using them. If you use non-functional proxies, ScrapeBox simply won't work. Hit the "Test Proxies" button.
Choose "Test all proxies."
Wait for ScrapeBox to test the proxies (it can take a while). When it's finished you'll see something like this:
Hit the filter button and choose "Keep Google Proxies."
Hit the save button and choose "Save selected proxies to ScrapeBox."
This will save the working, Google-approved proxies.
Now that you have a handle on how it works it’s time to use ScrapeBox to help you build incredible backlinks.
Resource Page Link Building
Resource page link building is one of the most under-utilized white hat link building strategies on the planet. Where else can you find pages that exist for the sole purpose of linking out to other sites? However, most people shy away from this strategy because it’s extremely time consuming to find resource pages, hunt for contact information and reach out to webmasters. Fortunately, you can dramatically streamline the resource page link building process with ScrapeBox.
First, enter one of these tested footprints into ScrapeBox:
In conjunction with niche-related keywords.
And hit "Start Harvesting."
Sort your pages by PR to focus on the highest-value targets.
Now export your list, check for broken links, or just email site owners and beg for a link!
Competitor Backlink Analysis
There’s nothing better than reverse engineering your competition. It’s one of the only ways to quickly find an incredible list of high-value, niche related sites to get links from. While OSE, Majestic and Ahrefs are fantastic tools, they’re hard to use for sites with thousands of links. Enter ScrapeBox.
Open ScrapeBox and click Addons > Show available addons.
Choose ScrapeBox Backlink Checker 2:
And click "Install Addon."
For the addon to work, you need to have your competitor’s homepage in the harvester results area. To do this, just enter the site’s name:
Set the results to 10.
And scrape the results.
Delete any pages that you’re not interested in grabbing backlink information from.
Go back to the Addon menu and select the Backlink checker addon.
Click "Load URL List." Choose "Load from ScrapeBox Harvester."
Hit “Start. ”
When it's done, choose "Download Backlinks."
And save the file as a text file.
Close the Backlink checker and head back to the ScrapeBox main menu. Under "Manage Lists" choose "Import URL List."
And upload the text file you saved.
Check the PR of the links in your list.
Now you can sort by PR so that you spend your time on backlink targets that meet your page PR or homepage PR threshold:
Find Guest Post Opportunities
Searching for relevant, authoritative sites to guest post on is one of the most monotonous link building tasks on the planet. Armed with ScrapeBox you can find thousands of potential guest post targets — and weed out low PR sites — in a matter of minutes.
Start off with a few footprints that sites which accept guest posts tend to have, such as:
allintitle:guest post guidelines
intitle:write for us
“guest blogger”
And combine them with your target keywords.
Harvest your results. But this time, you want to delete duplicate domains. After all, you only need to see one published guest post or list of guest blogger guidelines to know that they accept guest posts.
Click “Remove/Filter” and choose “Remove Duplicate Domains.”
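“Remove Duplicate Domains” boils down to keeping only the first harvested URL from each domain. A rough Python sketch of that logic, using made-up URLs:

```python
from urllib.parse import urlparse

# Invented harvest results: two pages from the same guest-post site.
urls = [
    'http://example.com/write-for-us',
    'http://example.com/guest-post-guidelines',
    'http://another-blog.net/write-for-us',
]

seen, unique = set(), []
for url in urls:
    domain = urlparse(url).netloc.lower()
    if domain not in seen:       # first URL for this domain wins
        seen.add(domain)
        unique.append(url)
```

One URL per domain is all you need to know the site accepts guest posts.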
Check the PR. Because the PR of the guest post guidelines page doesn’t matter, choose the “Get Domain PageRank” option. This will show you the site’s homepage PR.
Now sort by PR and get crackin’!
Outbound Link Checker
You already know that PageRank is finite. And it’s a waste to work your tail off to land a backlink on a high PR page if it’s going to be surrounded by hundreds of others. Fortunately, using ScrapeBox, you can instantly find the number of outbound links of any page (or pages).
Click Addons → Show available addons. Choose the ScrapeBox Outbound Link Checker.
Click “Install Addon.”
If you have a list of domains loaded into ScrapeBox from a harvest, you can use those. Open the program from the addon menu and click “Load List.” Click “Load from ScrapeBox.”
If you’d prefer, you can upload the list of URLs from a text file. Copy and paste your target pages into a text file. Then click “Load List” from the addon and “Load from File.”
When the URLs display in the addon, click “Start. ”
And the addon will display the number of internal and external links.
If you want to maximize the link juice you get from each link you may want to limit your targets to pages with 50-100 or fewer external links. To do that, click the “Filter” button.
And choose your threshold:
And the addon will automatically delete any URLs with 100 or more external links.
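Under the hood, counting outbound links is just parsing a page's anchors and comparing hosts. A simplified Python sketch of the idea — the HTML and hostnames are invented, and a real checker would also fetch pages and resolve relative URLs:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCounter(HTMLParser):
    """Count internal vs. external <a href> links on one page."""

    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.internal = 0
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        href = dict(attrs).get('href', '')
        host = urlparse(href).netloc
        # Relative links (no host) and same-host links count as internal.
        if host and host != self.page_host:
            self.external += 1
        else:
            self.internal += 1

# Made-up page: one internal link, one external link.
html = '<a href="/about">About</a><a href="http://other.com/">Other</a>'
counter = LinkCounter('example.com')
counter.feed(html)
```

Filtering then just means dropping URLs whose external count exceeds your threshold.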
Find and Help Malware Infected Sites
A labor-intensive, but effective, white hat link building strategy is to help webmasters with infected sites. Some site owners neglect their sites for months at a time — leaving them ripe for hackers. If you can swoop in and save the day, they’ll often be more than happy to reward you with a link. You can find dozens of niche-relevant infected sites using ScrapeBox.
There’s no footprint to use for malware infected sites. However, the CMS Pligg tends to have an unusual amount of infections. You can find Pligg sites using footprints such as:
“Five character minimum”
Once the URLs are loaded up, install the Malware and Phishing Filter addon.
Start the addon, and choose “Load URLs from Harvester.”
Click “Start.”
The tool will show you if your list has any infected sites.
If you find any, do not visit the sites! They can (and will) infect your PC with malware.
Instead, choose “Save Bad URL’s to File”.
And save the list.
We’re going to use another ScrapeBox addon to get the contact information of the infected site owners: the ScrapeBox Whois Scraper. This tool allows you to find the Whois information for the infected sites without having to actually visit them.
Once installed, open the addon. Load your file of infected sites.
Once finished you’ll see a list of names, emails, etc.
Save the file. Now put on your cape, reach out to the infected site owners and go save the day!
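If you'd rather pull the contact emails out of the saved Whois data yourself, a quick regex pass over the exported text works. This sketch assumes an invented Whois record; real records vary a lot in format:

```python
import re

# Invented Whois output, standing in for the file the addon saves.
whois_record = """
Domain Name: EXAMPLE.COM
Registrant Email: owner@example.com
Admin Email: admin@example.com
"""

# A simple (not RFC-complete) email pattern is plenty for Whois text.
emails = sorted(set(re.findall(r'[\w.+-]+@[\w-]+\.[\w.-]+', whois_record)))
```

Deduplicating first keeps your outreach list from emailing the same owner twice.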
Local SEO Citation Reverse Engineering
If you do local SEO, you already know that citations are the lifeblood of your campaign. However, reverse engineering your competition’s local citations using OSE or other tools doesn’t always work. Why? Because NAP (Name, Address, and Phone Number) citations aren’t always backlinks, so they don’t show up in link analysis tools. And without the ability to reverse engineer, local community pages and directories are almost impossible to find. But not with ScrapeBox.
For this example, let’s assume you’re trying to rank a dentist in Pawtucket, Rhode Island. First, conduct a local search in Google:
And visit the site of one of the top results.
Look for their address on the sidebar or contact us page.
And copy that address into the keyword area of ScrapeBox.
Important: Make sure the address is on a single line.
And put the street address in quotes (if you don’t, the search engines will sometimes return results that don’t have that exact street address on the page).
And add a few variations of the street name. This way, if the citation is listed as “ave.” instead of “avenue” or “rd.” instead of “road,” you’ll still find it.
Finally, you don’t want to see pages of the business you’re reverse engineering in the results. And if the site has their address listed in the sidebar or footer (as many local businesses do), you’ll find that your results are littered with hundreds of pages from that domain. You can avoid this by adding the -site: operator to your keyword search. This operator prevents any results from that site from showing up in your search.
Add this to the end of the keywords that you already entered into ScrapeBox.
Hit “Start Harvesting.” And you should find tons of otherwise impossible-to-find citation targets:
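The query-building step above is easy to script if you're assembling a long list of address variations. A small Python sketch — the address and domain are invented for illustration:

```python
# Made-up business details for the example.
address = '123 Main'
city_state = 'Pawtucket, RI'
variations = ['Street', 'St.', 'St']          # street-name spellings
own_site = 'examplesmiles.com'                 # the business's own domain

# Quote the street address and exclude the business's own site, as
# described above, for each variation.
queries = [f'"{address} {v}" {city_state} -site:{own_site}'
           for v in variations]
```

Paste the resulting lines straight into ScrapeBox's keyword area, one per line.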
White Hat Blog Commenting
While ScrapeBox is infamous for its automatic blog commenting feature, it is surprisingly useful for white hat blog commenting. You can use ScrapeBox to quickly find tons of authoritative, niche-targeted pages to drop manual blog comments on.
First, enter one of these example footprints (there are hundreds) for finding pages that allow blog comments into the ScrapeBox harvester:
“you must be logged in to comment” (This is a good one because sites that require login usually don’t get spammed to death.)
“post a comment”
“post new comment”
Then, enter a few niche keywords into the keyword field.
Click “Start Harvesting.” When you get your results, you should sort them by PR and delete any that fall below a certain threshold (for blog commenting, it’s best to sort by page PR, not homepage PR).
Sort by PR:
And delete any that don’t seem worthwhile.
If you have a large list and want to choose your targets carefully, you might also want to check the number of outbound links.
This time, load the list from ScrapeBox:
And filter out any that seem to have too many external links.
Now you can save the results and use that as your working list.

Frequently Asked Questions about ScrapeBox

What is ScrapeBox tool?

ScrapeBox is something of a forbidden word in SEO, and most SEOs know it at least in passing. In short, it’s a tool used for mass scraping, harvesting, pinging and posting tasks, designed to maximize the number of links you can gain for your website and help it rank better in Google.

What is the footprint on ScrapeBox for?

The footprint is what you include when you want to look for something that tends to appear on certain sites. For example, “Powered by WordPress” is a common footprint used to find WordPress blogs. … For example, if you enter the keyword “weight loss,” ScrapeBox would automatically search for: site:.gov weight loss.
