My live test in the URL inspection says “Failed: Robots.txt unreachable”. I can reach it with the browser, and if I change my user agent to Googlebot, I can still see it. Not sure what’s wrong. I saw this was a previous issue on Git, so hoping for a resolution on here!
That might be causing an issue if it applies the wildcard rule over everything else. Google’s error could also just be wrong; I can’t think of a reason why it would fail when the file is clearly available. You could try removing everything else, setting up the policy around just the general user agent, and seeing if it works.
I recommend checking your Runtime Logs; you might see that locale information (the accept-language header) is missing. I believe Googlebot also doesn’t send this header in this case, and that is what causes the crawl to fail.
However, once you add this header to the request, the actual content is returned. The recommended mitigation is to not rely solely on the accept-language header, but rather to have a fallback for when this header is not present:
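Here is a minimal sketch of what that fallback could look like. It assumes a hypothetical list of supported locales and a default locale; adapt the names and the way the resolved locale is consumed to your own framework and routing setup.

```ts
// Locale resolver with a fallback for requests that omit accept-language.
// SUPPORTED_LOCALES and DEFAULT_LOCALE are hypothetical placeholders.
const SUPPORTED_LOCALES = ['en', 'de', 'fr'];
const DEFAULT_LOCALE = 'en';

function resolveLocale(acceptLanguage: string | null): string {
  // Crawlers such as Googlebot may not send an accept-language header,
  // so fall back to a default locale instead of failing the request.
  if (!acceptLanguage) {
    return DEFAULT_LOCALE;
  }

  const preferred = acceptLanguage
    .split(',')[0]   // take the highest-priority entry, e.g. "en-US;q=0.9"
    .split(';')[0]   // drop any quality value
    .trim()
    .split('-')[0]   // reduce "en-US" to "en"
    .toLowerCase();

  return SUPPORTED_LOCALES.includes(preferred) ? preferred : DEFAULT_LOCALE;
}

// Usage, e.g. inside a request handler or middleware:
// const locale = resolveLocale(request.headers.get('accept-language'));
```

With something like this in place, a request without the header (as Googlebot sends it) still resolves to a locale and returns content instead of failing.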