

What are the odds that you’re actually going to get a bounty out of it? It seems unlikely that an AI would hallucinate a genuinely valid bug.
Maybe the people doing this are much more optimistic than I am about how useful LLMs are for this, but it’s also possible there’s something more malicious behind it.
I try to write comments whenever the code isn’t obvious on its own. A “never write comments” proponent might argue that you should never write code that isn’t obvious on its own, but that doesn’t always work in practice.
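
A minimal sketch of the kind of case I mean (hypothetical function and numbers): the code itself is readable, but the *why* behind a value isn’t recoverable from the code alone, so a comment carries the missing context.

    # Hypothetical example: the "what" is obvious, the "why" is not.
    def retry_delay(attempt: int) -> float:
        # Cap backoff at 30s: suppose the upstream load balancer drops
        # idle connections after 60s, so longer waits just guarantee a
        # reconnect. The cap would look arbitrary without this comment.
        return min(2.0 ** attempt, 30.0)

No amount of renaming makes `30.0` self-explanatory; the constraint lives outside the code.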