Hacker News | Karuma's comments

Every single line in this article was written by an LLM...

Only the images? The entire article is pure AI slop. But no one even cares anymore... People seem to love this kind of empty text, or no one even reads anymore, or everyone here is a bot already... Who knows.


The article is not AI slop.

I spent 4 days on it, and on the video I made to go along with it, in which I speak every word. The video has no AI; it is all stock video and audio footage which I painstakingly stitched together in DaVinci Resolve.

I used AI to spell-check the article and fix my ESL grammar. Initially I also generated a number of unnecessary AI images, which I later removed. I only left the ones that explain certain things, like the P2P model.


Yeah, I've been just slowly blocking all these domains, users, etc. But nowadays it's just unbearable. We have already lost this war.

And seeing this kind of crap every day at the top of the front page of the websites I used to love, with hundreds of comments from intelligent people not even noticing all this useless AI slop... A very sad future lies ahead.


Yes, these people are so unbelievably stupid that they think others more intelligent than them can't tell when they use AI to write their stuff. And then they act so annoyed when they get exposed... It's unbearable.

The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.


Thank you.

Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.


1984 is N°22 on that list...


Wow, every single word in the original post and on that README.md is pure LLM. How sad.

In any case, this has been done at least since the very first public releases of Llama by Meta... It also works for image models. There are even a few ComfyUI nodes that let you pick layers to duplicate on the fly, so you can test as many as you want really quickly.


Fair point on the writing style, I used Claude extensively on this project, including drafting. The experiments and ideas are mine though.

On the prior art: you're right that layer duplication has been explored before. What I think is new here is the systematic sweep toolkit + validation on standard benchmarks (lm-eval BBH, GSM8K, MBPP) showing exactly which 3 layers matter for which model. The Devstral logical deduction result (0.22→0.76) was a surprise to me.

If there are ComfyUI nodes that do this for image models, I'd love links. The "cognitive modes" finding (different duplication patterns lead to different capability profiles from the same weights) might be even more interesting for diffusion models.
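For anyone following along, the core idea being discussed can be sketched in a few lines. This is a toy illustration I'm adding, not the author's toolkit: the "layers" here are plain arithmetic functions standing in for transformer blocks, and a real sweep would duplicate decoder layers in an actual model and score each variant on benchmarks like BBH or GSM8K.

```python
# Toy sketch of the layer-duplication ("replay") idea.
# A "model" is a list of layer functions; duplicating a layer means
# applying the same weights twice in a row at that position.

def duplicate_layers(layers, indices):
    """Return a new layer list where each layer whose position is in
    `indices` is applied twice in a row."""
    out = []
    for i, layer in enumerate(layers):
        out.append(layer)
        if i in indices:
            out.append(layer)  # replay the same layer (shared weights)
    return out

def run(layers, x):
    """Run the input through the layer stack in order."""
    for layer in layers:
        x = layer(x)
    return x

# Toy "layers": simple arithmetic standing in for transformer blocks.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

baseline = run(layers, 0)  # ((0 + 1) * 2) - 3 = -1
print(f"baseline: {baseline}")

# Systematic sweep: duplicate each single layer and compare outputs.
for i in range(len(layers)):
    variant = run(duplicate_layers(layers, {i}), 0)
    print(f"duplicate layer {i}: {variant}")
```

The point of the sweep is that duplicating different layers changes the output in different ways, which is the toy analogue of different duplication patterns producing different capability profiles.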


I only know of this one: https://github.com/shootthesound/comfyUI-Realtime-Lora. Haven't played with any layer manipulation though.


I was thinking more like this one: https://github.com/AdamNizol/ComfyUI-Anima-Enhancer/

"It adds the Anima Layer Replay Patcher, which can enhance fine detail and coherence by replaying selected internal blocks during denoising."


I tried out the one I linked with SD 1.5 today, moved the sliders around like a total noob, and got pretty bad results, but I found no way to "replay" any of the layers like the one you linked, so thanks for the link. Must take a lot of trial and error, haha. I'll check it out, assuming it works with the Anima preview 2 too.


Please, HN mods, no more ChatGPT articles...

I'd ask any LLM myself if I cared to read their empty words.


As I said in another thread here, I did not feel the article was LLM-generated, otherwise I wouldn't have posted it.

Perhaps I've been desensitized, or LLMs have crossed my BS-sensing threshold but haven't yet crossed others'.


FWIW, there are several AI "vetting" bots out there, e.g. https://tropes.fyi/vetter. Not my thing; I bookmarked that one from an earlier HN thread. That particular tool graded the article as firmly in "AI Slop" (their term) territory.

https://tropes.fyi/vetter/4a753e67


Those don't work. Test it by generating some AI slop (it gets a 100% fake score), then removing the top five tells: now it gets a 100% human score.

Repeat with your own writing: insert a few em dashes and a few of their favorite turns of phrase, and it goes from a 100% human rating to 100% AI.


A simple test I just did:

Me: What are some of Maradona's most notable achievements in football?

Mercury 2 (first sentence only): Dieadona’s most notable football achievements include:

Notice the spelling of "Dieadona" instead of "Maradona". Even any local 3B model can answer this question perfectly fine and instantly... Mercury 2 was so incredibly slow and full of these kinds of unforgivable mistakes.


That one looks extremely fake... No one would mistake サーカス for "circuit", which is usually written as サーキット and is pronounced completely differently. Also, referring to the sport and everything around it as "the F1 circus" is very common in Japan and other parts of the world.


From TFA "All the artwork on the US games were corrected to say "Circuit", yet the original Taito PCBs are labeled "Circus"."

Perhaps the word "corrected" should be in quotes on that page then.

