  • Have you tried this:

    # Allow Google's crawler everywhere (an empty Disallow means allow all)
    User-agent: Googlebot
    Disallow:

    # Block every other crawler from the whole site
    User-agent: *
    Disallow: /

    It’s not quite what you’re looking for, but my thought process would be: disable the plugin and remove the robots.txt file. If that’s OK, add back the file above, which should specifically allow Googlebot. If that’s still OK, add back your own file, and so on. (You can also test a rule set locally without waiting on Google; see the sketch below.)

    I use this resource when I’m stuck: https://www.robotstxt.org.
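
    Here is a minimal sketch of that local test, using Python’s standard-library urllib.robotparser. The rules are just the example from above, and “SomeOtherBot” is a made-up name for illustration:

    import urllib.robotparser

    # The example rules from above, parsed locally instead of fetched live
    robots_txt = """\
    User-agent: Googlebot
    Disallow:

    User-agent: *
    Disallow: /
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # can_fetch() matches the agent name against each User-agent group
    print(rp.can_fetch("Googlebot", "/"))     # True: the Googlebot group allows everything
    print(rp.can_fetch("SomeOtherBot", "/"))  # False: falls through to the * group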

    Thread Starter goZzee (@gozzee)

    Thank you, Andrew.

    I did try that, but to no avail. So I did a Fetch as Google and got this result:

    Fetch as Google
    This is how Googlebot fetched the page.
    URL: https://www.nottingham-wedding-photographer.com/
    Date: Wednesday, March 20, 2013 at 9:57:37 AM PDT
    Googlebot Type: Web
    Download Time (in milliseconds):
    The page could not be crawled at this time because it is blocked by the most recent robots.txt file Googlebot downloaded. Note that if you recently updated the robots.txt file, it may take up to two days before it’s refreshed. You can find more information in the Help Center article about robots.txt.

    So it can take up to two days for Google to refresh its copy of the new robots.txt.
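
    In the meantime, a quick way to see what the server is serving right now (which is what Googlebot will pick up on its next refresh) is to fetch the file directly. A minimal sketch, using only Python’s standard library:

    import urllib.request

    # Fetch the live robots.txt; Googlebot may still be using a copy cached
    # up to two days earlier, but this is what it will see on refresh.
    url = "https://www.nottingham-wedding-photographer.com/robots.txt"
    with urllib.request.urlopen(url) as resp:
        print(resp.status)           # expect 200 if the file is reachable
        print(resp.read().decode())  # the rules Googlebot will read next time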

    I’m so impatient!

  • The topic ‘robots.txt’ is closed to new replies.