Anti-Crawler turned on causes W3C validator to fail.
-
When the blog feeds of my sites are checked with the W3C RSS Feed Validator at:
https://validator.w3.org/feed/check.cgi
and the Anti-Crawler option is turned on, the result shows:
It looks like this is a web page, not a feed. I looked for a feed associated with this page, but couldn’t find one. Please enter the address of your feed to validate.
I would like to whitelist this service somehow, but I do not know how to do so.
In the Anti-Crawler Description it says:
To enable/disable, open the Advanced settings, and turn on/off “Block by User-Agent”.
Yet when I open the Advanced settings there is no option to turn on/off “Block by User-Agent”. I searched the Advanced settings and the words “User-Agent” do not appear anywhere. I love this plugin and it is very helpful, but when it comes to troubleshooting, whitelisting, and the like, the tools are confusing and the documentation is just not good.
How can I fix this so that whichever bot the W3C validator uses to fetch my feed is no longer blocked?
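In the meantime, here is a rough sketch of how I have been trying to confirm that the block is User-Agent based. The feed URL below is just a placeholder for my real feed, and the "bot-style" User-Agent string is only a guess; I have not been able to confirm what the W3C validator actually sends.

# Rough diagnostic: fetch the feed twice, once with a browser-style
# User-Agent and once with a bot-style one, and compare the responses.
# If the bot-style request gets an HTML page back instead of XML,
# the block is most likely based on the User-Agent header.
import requests

FEED_URL = "https://example.com/feed/"  # placeholder for my real feed URL

USER_AGENTS = {
    "browser-style": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "bot-style": "FeedValidator/0.1 (test)",  # guessed validator-like UA, not confirmed
}

for label, ua in USER_AGENTS.items():
    resp = requests.get(FEED_URL, headers={"User-Agent": ua}, timeout=15)
    content_type = resp.headers.get("Content-Type", "")
    print(f"{label}: HTTP {resp.status_code}, Content-Type: {content_type}")
    # A working feed should come back as XML; an Anti-Crawler challenge
    # page usually comes back as text/html.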