The writing style seems a little unnatural, but the odd grammatical error convinced me that it wasn't the result of someone asking an LLM to review the libraries and write the reviews in the voice of an intellectual who went to Harvard.
What a world we live in, that suspecting an LLM guided by a specific prompt would be my first instinct.
I firmly believe that the quality of HN comments is hurt more by people complaining about LLM-generated content than by the LLM-generated content itself.
At least the LLMs are contributing to the discussion.
If people generally thought the LLMs were contributing anything of value, then the high volume of comments against them that you're describing wouldn't exist. Instead, LLMs are contributing bad content and also the downstream criticism on top of it.
On the one hand, I agree that LLMs, wherever perceptibly used, do nothing to aid legibility and much to hamper it. That is legitimately irritating.
On the other, it isn't at all new, is it? How LLMs write best, or at least how they write most, is just an outgrowth of the same methylphenidate style that has characterized online writing, broadly construed, since the days of the original Buzzfeed, which might as well have been called "Slopchute" if we were using those words that way then. Certainly it, more than any other single source, is responsible for the decay of cultural discourse that made the current troubles first possible and then inevitable - especially thanks to the huge volume of such useless crap (and its worse imitators) in these models' training sets.
I would certainly like less of the slop, as much as anyone. On the other hand, it's surprising to me at this late date to encounter people who read a lot online, and have not become accustomed to dealing with wordy junk written by Adderall casualties - that is, accustomed to dispassionately filleting a longform article on sight, skimming and glancing back and forth to identify what thesis may be present if any, and only actually settling in to read sequentially in the uncommon case where something initially mistaken for "content" has proven to be worth that level of effort.
It's surprising to me because I expect people to respect the value of their own interested attention, and not permit it to be idly wasted. Sometimes someone has something worthwhile to say but not the skill to do a competent job of actually saying it, and so the reader is required to meet the writer considerably more than halfway. I described above what that process looks like in practice. It isn't really something I tried to learn, just something I began doing out of frustration with having my time wasted. (Is that unusual? A little while back someone here had to explain to me, with obviously strained patience, that most people experience pleasure as a direct effect of opiates, and not only as a side effect of the sudden surcease of pain. That clarified for me why so many people get hooked so easily, but it also suggests I may not be the best judge of what's "normal" in these matters, I suppose.)
In terms of difference in practice, LLM output is a little wordier, a little more of a slurry, sure - but on the other hand, precisely because the results tend to exhibit such a strong, "pattern language" form of stereotypy, I find it's actually often simpler to dissect a large quantity of LLM output for the sentence or two of actual thought underlying it than to do the same with something of similar length written by a human, whose paragraphs will almost never be instantly dismissible en bloc, the way most LLM-output paragraphs are.
I suppose that last may sound distasteful, but consider: the paragraphs we're discussing, wherever they originate, are filler, and that's why we don't like their presence. These paragraphs have been filler since this was The Atlantic's unique house style back when that was still a real magazine, and they were never going to be anything but filler, so whether they were excreted by a human or a robot has nothing to say about the artistic quality of what we've already agreed, indeed taken as axiomatic, is not art. It's styrofoam! It's packing material, which we were never going to care about beyond the minimal effort required to throw it away. So why care all that much whether it's hand-blown or machine-extruded?
> ...the odd grammatical error convinced me that it wasn't the result of someone asking an LLM...
That's easily solved by models intentionally introducing the odd grammatical error here and there, just enough to convince the sceptics, not so many as to give the impression of being unlettered. A bit like the mythical 'RHS button' (which stands for 'real human shitty' but in reality is called the 'Shuffle' or 'Swing' function) which is supposed to make mechanically-precise drum machines sound more like human drummers.
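Purely as illustration, here's a minimal Python sketch of that idea; the function name, the substitution table, and the error rate are all invented for the example. It's the textual equivalent of a swing knob's timing jitter: inject the odd plausible slip at a low, tunable probability.

    import random

    # Hypothetical "humanize" pass: swap in the occasional plausible slip at
    # a low, tunable rate -- enough to read as human, not so many as to read
    # as unlettered. The substitution table is invented for the example.
    TYPO_SWAPS = {"definitely": "definately", "their": "there", "its": "it's"}

    def humanize(text, error_rate=0.02, seed=None):
        rng = random.Random(seed)
        out = []
        for word in text.split():
            if word.lower() in TYPO_SWAPS and rng.random() < error_rate:
                out.append(TYPO_SWAPS[word.lower()])  # note: drops capitalization; fine for a sketch
            else:
                out.append(word)
        return " ".join(out)

    print(humanize("Their answer was definitely right.", error_rate=0.5, seed=1))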
As straw men go, this is an attractive one, but...
When I was fresh out of undergrad, joining a new lab, I followed a similar arc. I made mistakes, I took the wrong lessons from grad student code that came before mine, I used the wrong plotting libraries, I hijacked Python's module import logic to embed a new language in its bytecode. These were all avoidable mistakes, and I didn't learn anything except that I should have asked for help. Others in my lab, who were less self-reliant, asked for and got help avoiding the kinds of mistakes I confidently made.
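For anyone curious what that import-logic hijack looks like, here's a minimal sketch: a sys.meta_path finder that intercepts imports of a hypothetical ".mylang" extension, "transpiles" the source to Python (stubbed out here), and executes the result as the module body. Everything below except the importlib machinery itself is invented for illustration.

    import sys
    from pathlib import Path
    from importlib.abc import Loader, MetaPathFinder
    from importlib.machinery import ModuleSpec

    def transpile(source):
        # Stand-in for a real translator from the embedded language to Python.
        return source

    class MyLangLoader(Loader):
        def __init__(self, path):
            self.path = path

        def create_module(self, spec):
            return None  # defer to Python's default module creation

        def exec_module(self, module):
            code = compile(transpile(self.path.read_text()), str(self.path), "exec")
            exec(code, module.__dict__)

    class MyLangFinder(MetaPathFinder):
        def find_spec(self, name, path, target=None):
            candidate = Path(name + ".mylang")  # only checks the cwd; it's a sketch
            if candidate.exists():
                return ModuleSpec(name, MyLangLoader(candidate), origin=str(candidate))
            return None

    sys.meta_path.insert(0, MyLangFinder())
    # After this, `import foo` will pick up ./foo.mylang if present.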
With 15 more years of experience, I can see in hindsight that I should have asked for help more frequently because I spent more time learning what not to do than learning the right things.
If I had had Claude Code, would I have made the same mistakes? Absolutely not! Would I have asked it to summarize research papers for me and to essentially think for me? Absolutely not!
My mother, an English professor, levels similar accusations at today's students and how they let models think for them. It's genuinely concerning, of course, but I can't help thinking this phenomenon occurs because learning institutions have not adjusted to the new technology.
If the goal is to produce scientists, PIs are going to need to stop complaining and figure out how to produce scientists who learn the skills that I did even when LLMs are available. Frankly I don't see how LLMs are different from asking other lab members for help, except that LLMs have infinite patience and don't have their own research that needs doing.
AI does not give you knowledge. It magnifies both intelligence and stupidity, with zero bias toward either. If you are of above-average intelligence, you may be able to do a little more than before, assuming you were trained before AI came along. And if you are not so smart, you will be able to make larger messes.
The problem, and I think the article indirectly points at this, is that the next generation to come along won't learn to think for themselves first. So they will, on average, end up on the 'B' track rather than developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything, because the AI can do that for them, or so they believe. They don't see that if you don't learn to think about smaller problems, the larger ones will be completely out of reach.
Maybe the solution is an AI that acts as an instructor instead of just trying to solve everything itself. I do this with my kids: when they ask me how to do something, I give them hints, but I don't outright do it all for them. The article's author mentioned in the first part that this is how they would instruct too.
I recently heard that a professor said to the class: you can use an AI to solve the assignments; however, I'll see whether you really understand the material on the final exam.
Students are given student-level problems not because someone wants the results, but so they can learn how solving problems works. Solving those easy problems with an LLM does not help anyone.
Here I was thinking this article would tell me how to turn my unmanaged switches into routers, but no, "anything" actually means "any fully featured general purpose computer with networking".
That's theoretically possible but a bad idea for a managed switch, because they seldom have enough CPU performance or IO between the CPU and switch silicon to provide respectable routing performance. For an unmanaged switch, it's more likely that whatever CPU core is present (if any) doesn't have enough resources to run a real network stack.
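For contrast, the "any fully featured general purpose computer" case really is that small. Here's a minimal sketch for a Linux box, wrapping the standard sysctl and iptables commands in Python; the interface name is an assumption, and it needs root:

    import subprocess

    WAN_IF = "eth0"  # assumed upstream interface

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Let the kernel forward packets between interfaces.
    run(["sysctl", "-w", "net.ipv4.ip_forward=1"])
    # NAT outbound traffic so LAN hosts can share the WAN address.
    run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN_IF, "-j", "MASQUERADE"])

The heavy lifting is all in the kernel; the hardware just needs a CPU and NICs fast enough to keep up, which is exactly what switch SoCs lack.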
Something many may not know is that, beyond his own books, Tracy was also deeply involved in Jonathan Harr's "A Civil Action." He and Harr were friends, and it was Tracy who told Harr about the courtroom case. Later, whenever Harr got stuck, Tracy worked with him to edit and give feedback on his drafts.
He always spoke more about "Mountains Beyond Mountains" than about his other works, I think because of what he had to endure to write it: the places he had to travel to left him with severe illness and lasting health problems.
He was extremely proud of the other work he did, like "Mountains Beyond Mountains," but I'll always remember the bookcase where he kept every edition of "The Soul of a New Machine" in every language it was printed in. I think seeing that his work was worth being translated into so many languages was for him the biggest achievement of all.
I feel like I've been waiting for this to mature for a decade. I love that the vision has been realized despite the enthusiasm for functional programming languages cooling off somewhat.
That's actually brilliant! Most of my classes only taught the tools needed to accomplish the coursework, not generally useful tools. Even our OS class focused on the workings of the kernel, not the Unix philosophy, how it influenced which tools were included, and how to use them. Then again, 20 years ago the year of the Linux desktop was much farther away than it is today...