Advanced SEO Techniques

robots.txt

Updated November 4, 2022

You've no doubt heard of /robots.txt.

It's your magic ticket to controlling what Google crawls, and therefore your path to SEO superstardom, right?

Nope. The truth is, if you're designing a site that's intended for the public to see and use, you're better off forgetting robots.txt even exists.

You're much more likely to create problems and damage your site's SEO than to gain anything.

Really, trust me. There is nothing good to be gained here, unless maybe you're building some private pages for the Department of Defense.

NOTE: You probably shouldn't tamper with the battery in your Tesla either.

Things You Probably Didn't Know

From Google's perspective, /robots.txt really only tells it what parts of your site to avoid looking at.

Why Does robots.txt Exist?

It was originally designed in the early days of the Internet when the web was the domain of universities and scholars who were playing with funky ideas.

Its job was to prevent crawlers from investigating parts of a website that they shouldn't. Perhaps because:

  1. That area of the site is dynamic, random, and unpredictable. So much so that every time you visit a page, the content might be radically different, or the page might not exist anymore. Search-engine-indexing it serves no purpose.
  2. That area is fragile, and heavy automated crawling could overload or break it.

Today, most websites exist to share content with the world. And that's certainly the whole point of Webflow.

Messing with /robots.txt is like licking your Tesla battery... probably a Bad Idea.

Just because you can, doesn't mean you should.

Mistakes People Make

Remember: /robots.txt tells Google what parts of your site to avoid looking at. The META robots tag on a page tells Google whether you want that page indexed or not.

These have different purposes, and that lack of understanding sometimes bites people in the butt.

Let's suppose you have a page in Google's search index that you don't want to be there anymore. How do you remove it?

Many people will have a shotgun reaction here, and they'll try to block that page everywhere, as shown below:

  • They will jump into /robots.txt and tell Google to stop crawling that page.
  • And they'll also add the META noindex tag to the page.
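With a hypothetical /old-page path, their robots.txt picks up a block like this, alongside the noindex tag on the page itself:

User-Agent: *
Disallow: /old-page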

But the result is probably not what they wanted.

GoogleBot will check the /robots.txt first, and see that it tells it not to crawl that page. So, as requested, it won't...

And it will never even see the META noindex tag you've added, so it will never update your search results.

The result?

Your page will stay in the Google search results... forever.
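
The fix is counterintuitive: leave that page crawlable in /robots.txt, and let the page-level tag do the work. Add only the META robots tag to the page's <head>:

<meta name="robots" content="noindex">

On its next crawl, GoogleBot will see the noindex and drop the page from the search results.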

Configuring robots.txt

For most Webflow sites, simply not having a robots.txt is your best approach. Webflow hosting doesn't support back-end programming, so there's no wild-and-crazy server-side code that you'd need to keep the robots clear of.

You're much better off keeping your robots.txt empty, and using <meta> tags to tell Google what you want excluded from search results.

If you are determined to have a robots.txt, however, you need to know this.

How to Allow-all in robots.txt

The #1 mistake I see people make is that they misunderstand the robots.txt syntax, and end up blocking all robots from indexing their site.

If you want to allow your entire site to be visited, and indexed or excluded based on your page-level META rules, this is the syntax you want:

User-Agent: *
Disallow:

What this says is:

No matter the robot, do not block anything.

If you make the mistake of putting Disallow: / then you are telling robots that they are not permitted to explore any path beginning with /. Which means every path. Which means you have just blocked every page on your site from robots.
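
For the record, this is the shut-everything-out version you want to avoid:

User-Agent: *
Disallow: /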

Yes, you could want that... but most people invest in Webflow because they want great-looking sites that can be found by the world.

Don't shut the world out accidentally.

Google Syntax

Google also provides an Allow keyword, which is a bit more comprehensible. However, it's non-standard and may be ignored by other robots.

User-Agent: *
Allow: /

I'd recommend sticking with standards, whenever possible. Google likes them, too.

More Tools

When in doubt, test.

Google has a built-in robots.txt testing tool that you can use on your Search Console verified properties.

See Google's docs also.
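
You can also sanity-check what the robots actually see by fetching the file directly. For example, from a terminal (substitute your own domain for the placeholder):

curl https://yoursite.com/robots.txt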
