It’s a shame it’s now so much easier for bullshitters to produce bullshit quickly, but it is still irrelevant to whether a given piece of work is good or not.
It's relevant because AI allows you to work faster and in larger volumes, and pushes quality down in the process: the user will optimize for quantity, not quality, a choice that wouldn't be viable in the absence of AI.
And even if further advances in ML can improve the "writing quality", overall quality is a much more multidimensional thing, and being able to produce a convincing-sounding review (correctly formatted, engaging with the article's actual content, etc.) is not the same as giving a useful one. As another comment in this subthread noted, if an author feels an LLM-generated review is worthwhile, they can generate one themselves. It's entirely possible that a specially trained LLM could give some halfway decent reviews of basic things like spelling and grammar, or missing information or sections, simply based on previous article drafts and their reviews.
We should not be predicating our concerns about LLM-generated content solely on its "quality", because ultimately, the problem with it is that it is generic. I think it unlikely that LLMs will be able to produce a genuine and thoughtful critique of a journal article unless and until there are significant breakthroughs, possibly even to the level of achieving AGI or something like it. And even wider use of a more advanced, review-specific LLM like the one I describe above presents serious concerns, because it runs the risk of suppressing articles that deviate from the "norm" in ways the LLM has no way to appreciate, but which present the findings better or even make the science better.
If you think a lot of people don't already optimize for quantity, I have a bridge to sell you.
I do get the point that LLMs make producing crap easier, but that's somewhat independent of LLMs being used generally, which is going to happen in any case.
Honestly, I don't think it's viable to manually optimize for quantity in academic paper reviewing, specifically. But I might be wrong, of course. I think it's too much work for very little profit.
Basically, the only incentives are to slightly improve your resume by showing you are a reviewer for reputable journals, and to get fee waivers for publishing your own work in the same journal (usually a crappy one). But if you review a lot, you may get selected to be an editor and then climb the ladder from there, to editor at a better journal or editor-in-chief, all of which can be prestigious (and paid) positions.
You seem to be arguing against some other point I have not made. I said it is irrelevant to whether a given piece of work is good or not. If a given piece of writing is good, it’s good regardless of what tools the writer used.
If, however, a given piece of work is good but is produced with a tool that drives down the average quality of the field, it may be reasonable to ask that people not use the tool, even if they produce good work with it.
This is an instance of the base rate fallacy: the "99% accurate test says you have an incredibly rare disease" problem.
As the percentage of garbage that goes into peer review (or any other filter) increases, the amount of garbage that manages to sneak through will increase as well, even if the filter's accuracy stays exactly the same.
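To make that concrete, here's a rough back-of-the-envelope sketch in Python. All the numbers are made up: the 95% catch rate and the assumption that no good papers get rejected are purely hypothetical.

    # Hypothetical filter: catches 95% of garbage submissions and
    # (optimistically) never rejects a good one.
    def leaked_garbage(n_submissions, garbage_rate, catch_rate=0.95):
        garbage = n_submissions * garbage_rate
        good = n_submissions - garbage
        leaked = garbage * (1 - catch_rate)   # garbage that sneaks through
        share = leaked / (good + leaked)      # share of the accepted pool
        return leaked, share

    for rate in (0.10, 0.30, 0.50):
        leaked, share = leaked_garbage(1000, rate)
        print(f"{rate:.0%} garbage in -> {leaked:.0f} leak through "
              f"({share:.1%} of accepted papers)")

At a 10% incoming garbage rate, about 0.6% of accepted papers are garbage; at 50%, that jumps to roughly 4.8%, five times as many in absolute terms, without the filter getting any worse.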