Google Describes How To Remove A Website From Search Results

Google discusses three methods for hiding a website from search results and which one you should use based on your situation.

Although Google claims that using a password is the best way to hide a website from search results, there are other options to consider.

This topic is highlighted in the most recent installment of YouTube’s Ask Googlebot video series.

Google’s John Mueller responds to a question about how to prevent content from being indexed in search results and whether websites are allowed to do so.

“In a nutshell, yes,” Mueller says.

There are three methods for removing a website from search results:

  • Use a password
  • Block crawling
  • Block indexing

In other words, a website can either opt out of indexing through crawler directives, or it can block access altogether by putting its content behind a password.

Blocking content from Googlebot is not against webmaster guidelines as long as it is also blocked from users.

For example, if the site is password protected when crawled by Googlebot, it must also be password protected when accessed by users.

Alternatively, directives must be in place to prevent Googlebot from crawling or indexing the site.

You may encounter difficulties if your website serves different content to Googlebot than it does to users.

This is known as “cloaking,” and it is against Google’s policies.

With that distinction established, here are the proper methods for hiding content from search engines.

3 Methods for Hiding Content from Search Engines

1. Password Security

If you want to keep your website private, locking it down with a password is often the best approach.

A password ensures that neither search engines nor random web users can access your content.

This is a common practice for websites still in development: publishing the site behind a password is a simple way to share in-progress work with clients while preventing Google from accessing content that isn’t ready for public viewing.
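To make this concrete, here is a minimal sketch of site-wide password protection using HTTP Basic Auth in a Flask app. The framework, route, and credentials are illustrative assumptions, not something from the video; most hosting platforms and CMSs offer an equivalent built-in option.

```python
from flask import Flask, Response, request

app = Flask(__name__)

# Illustrative credentials only -- in practice, use hashed passwords or your
# host's built-in protection rather than hard-coded values.
USERNAME = "client"
PASSWORD = "preview-only"

@app.before_request
def require_password():
    """Ask every visitor, including Googlebot, for the same credentials."""
    auth = request.authorization
    if auth is None or auth.username != USERNAME or auth.password != PASSWORD:
        # A 401 response with a WWW-Authenticate header triggers the browser's
        # login prompt and keeps crawlers out entirely.
        return Response(
            "Authentication required.",
            status=401,
            headers={"WWW-Authenticate": 'Basic realm="Staging site"'},
        )

@app.route("/")
def home():
    return "Work-in-progress site, visible only after logging in."

if __name__ == "__main__":
    app.run()
```

Because the same check applies to every visitor, Googlebot and human users see identical behavior, which keeps the setup clear of any cloaking concerns.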

2. Block Crawling

Another way to prevent Googlebot from accessing your site is to disable crawling. This is accomplished through the use of the robots.txt file.

People can access your site via a direct link using this method, but it will not be picked up by “well-behaved” search engines.

According to Mueller, this isn’t the best option because search engines may still index the website’s address without accessing the content.

That doesn’t happen often, but it is a possibility you should be aware of.
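As a rough illustration, the sketch below serves a robots.txt file that disallows all crawling. The Flask setup is our own assumption; a static robots.txt file placed at the site root works just as well.

```python
from flask import Flask, Response

app = Flask(__name__)

# Directives asking well-behaved crawlers not to fetch any page on the site.
# Note: this blocks crawling only; a URL can still appear in results without
# its content if other sites link to it.
ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

@app.route("/robots.txt")
def robots():
    # robots.txt must be reachable as plain text at the root of the domain.
    return Response(ROBOTS_TXT, mimetype="text/plain")
```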

3. Block Indexing

The third and final option is to prevent your website from being indexed.

You do this by including a noindex robots meta tag on your pages.

When search engines crawl a page and find a noindex tag, they are instructed not to show that page in search results.

The meta tag is not visible to users, and they can still access the page normally.
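For illustration, here is a hypothetical Flask route that adds the noindex signal both as a meta tag in the HTML and as an X-Robots-Tag response header; the route name and page content are made up for the example. The header form is useful for non-HTML files such as PDFs.

```python
from flask import Flask, make_response

app = Flask(__name__)

PAGE = """<!doctype html>
<html>
  <head>
    <!-- Seen only when the page is crawled; tells search engines not to index it. -->
    <meta name="robots" content="noindex">
    <title>Private page</title>
  </head>
  <body>Visitors can still open this page normally.</body>
</html>"""

@app.route("/private")
def private_page():
    response = make_response(PAGE)
    # The equivalent HTTP header covers responses that have no HTML <head>.
    response.headers["X-Robots-Tag"] = "noindex"
    return response
```

Either form requires that crawling is allowed: if robots.txt blocks the page, Googlebot never fetches it and never sees the noindex instruction.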

Mueller’s Closing Remarks

Mueller concludes the video by stating that Google’s top recommendation is to use a password:

“Overall, for private content, our recommendation is to use password protection. It’s easy to check that it’s working, and it prevents anyone from accessing your content.

Blocking crawling or indexing are good options when the content isn’t private. Or if there’s just parts of a website which you’d like to prevent from appearing in search.”
