Hello @webmakers2011
Thanks for sharing more information. First, I’ll mention that a robots.txt file provides crawl directives, not indexing directives. URLs blocked by robots.txt might still get indexed: if other pages link to a blocked URL, Google can index it without ever crawling it, in which case Google doesn’t know what the page contains.
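For illustration only, a blocking rule of this kind typically looks like the following (the /wishlist/ and /checkout/cart/ paths are assumptions on my part; your actual rules may differ):

    User-agent: *
    Disallow: /wishlist/
    Disallow: /checkout/cart/

Rules like these stop Googlebot from fetching those URLs, but they say nothing about indexing, which is why a link from another page can still land the URLs in the index.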
Your screenshot shows that the wishlist and cart-related pages are reported in Google Search Console as “Indexed, though blocked by robots.txt”. According to Google’s coverage report documentation:
Indexed, though blocked by robots.txt
The page was indexed despite being blocked by your website’s robots.txt file. Google always respects robots.txt, but this doesn’t necessarily prevent indexing if someone else links to your page. Google won’t request and crawl the page, but we can still index it, using the information from the page that links to your blocked page. Because of the robots.txt rule, any snippet shown in Google Search results for the page will probably be very limited.
Next steps:
1. If you do want to block this page from Google Search, robots.txt is not the correct mechanism to avoid being indexed. Remove the robots.txt block and use ‘noindex’ instead, since Googlebot must be able to crawl the page to see the noindex rule (see the example below this list).
2. If you do not want to block this page, update your robots.txt file to unblock your page. You can use the robots.txt tester to determine which rule is blocking this page.
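To make step 1 concrete, here is a minimal sketch of the two standard ways to set ‘noindex’; which one fits depends on your platform, so treat this as illustrative. Either add a robots meta tag inside the page’s <head>:

    <meta name="robots" content="noindex">

or send it as an HTTP response header:

    X-Robots-Tag: noindex

Either way, Googlebot can only see the rule if it is allowed to crawl the page, which is why the robots.txt block has to come off first.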
In summary, since you have set the wishlist and cart pages to noindex, remove the Disallow (crawl) directives for those URLs from your robots.txt file so Googlebot can crawl them and see the noindex rule.
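As a rough sketch, and again assuming hypothetical rules like the ones shown earlier, the edited robots.txt would simply drop those lines:

    User-agent: *
    # Wishlist and cart Disallow rules removed so Googlebot can
    # crawl those pages and see their noindex directive.

Once Googlebot recrawls the pages and processes the noindex, the “Indexed, though blocked by robots.txt” entries should drop out of the report over time.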
I hope that helps.