SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But almost every website has pages you don't want included in this exploration.
In the best case, these pages are doing nothing to actively drive traffic to your site; in the worst, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should absolutely read.
But in top-level terms, it's a plain text file that lives in your website's root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
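For illustration, a minimal robots.txt might look like this (the paths and sitemap URL are placeholders, not recommendations):

```text
User-agent: *
Disallow: /admin/
Allow: /admin/public/
Sitemap: https://www.example.com/sitemap.xml
```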
Some meta robots tags you might employ include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
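For example, a page that should be kept out of the index and whose links should not be followed could carry this tag in its `<head>`:

```html
<meta name="robots" content="noindex, nofollow">
```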
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
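For instance, in a raw HTTP response, the header sits alongside the others (the values here are illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```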
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can apply crawl and index directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of at a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.
Maybe you don't want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
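A sketch of what that combined header could look like (the date is a placeholder):

```http
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST
```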
Basically, the power of the X-Robots-Tag is that it is far more versatile than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
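As an illustration, a regular expression in an Apache configuration can cover several non-HTML file types at once (the extension list is an example, and the mod_headers module is assumed to be enabled):

```apache
# Matches .doc, .docx, and .xls files within this config's scope.
<FilesMatch "\.(docx?|xls)$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```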
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
Crawler directives:
- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer directives:
- Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let's say you want to block certain file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
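A sketch of that Apache configuration, assuming the mod_headers module is enabled:

```apache
# Requires mod_headers. Applies to every file ending in .pdf.
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```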
In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
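A hedged sketch for Apache, again assuming mod_headers is enabled (the extension list is illustrative):

```apache
# Requires mod_headers. Matches common image extensions.
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```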
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing those directives cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about a URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
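If you prefer the command line, you can also inspect response headers directly. The sketch below shows the idea: in practice you would fetch a real URL with curl, but here a canned response is piped through the same filter so the pipeline runs without network access (the URL in the comment is a placeholder):

```shell
# In practice, fetch the headers of a page with:
#   curl -sI "https://www.example.com/file.pdf" | grep -i '^x-robots-tag'
# Below, the same grep filter is demonstrated against a canned response.
printf 'HTTP/1.1 200 OK\r\nContent-Type: application/pdf\r\nX-Robots-Tag: noindex, nofollow\r\n' |
  grep -i '^x-robots-tag'
```

The `-I` flag asks curl for headers only, and the case-insensitive grep matches the header regardless of how the server capitalizes it.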
Another method that can be used at scale to identify issues on websites with a million pages is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/