Unfortunately, this is an artificial problem created by Google and others not adhering to the HTML spec. I realize it's a problem nonetheless, though, so I'm working on a workaround until they fix their bug (they fixed the same bug in their robots.txt parser back in 2007, so maybe they'll get around to fixing it in their sitemap parser someday).
Anyway, the current solution I'm leaning towards is letting users enter a list of URLs or URL wildcards that my plugin should skip. Basically, you could add /*sitemap.xml on the plugin's configuration page and it would leave the URLs on any matching page alone. This would also solve some edge-case problems with RSS and Atom feeds. I'm not 100% sure this is the approach I want to take, though, so I'm still working it out on my end.
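For what it's worth, the matching itself wouldn't need to be anything fancy. Here's a minimal sketch in Python of how a wildcard exclusion list like that could work; the pattern list, function name, and example paths are just placeholders for illustration, not the plugin's actual API:

```python
from fnmatch import fnmatchcase

# Hypothetical exclusion list a user might enter on the plugin's
# configuration page. "*" matches any run of characters.
EXCLUDE_PATTERNS = ["/*sitemap.xml", "/feed/*"]

def should_skip(path: str, patterns=EXCLUDE_PATTERNS) -> bool:
    """Return True if the request path matches any exclusion
    pattern, i.e. the plugin should leave that page's URLs alone."""
    return any(fnmatchcase(path, pattern) for pattern in patterns)

# Pages matching an exclusion pattern get skipped; everything
# else is processed as usual.
assert should_skip("/wp-sitemap.xml") is True
assert should_skip("/feed/atom/") is True
assert should_skip("/about/") is False
```

The check runs once per request against the configured patterns, so it stays cheap even with a handful of wildcards.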
Also, this feature is mostly designed for migrating between staging and production hosts. An immediate workaround is to use the plugin for development and content updates in staging and disable it in production.