Only the images? The entire article is pure AI slop. But no one even cares anymore... People seem to love this kind of empty text, or no one even reads anymore, or everyone here is a bot already... Who knows.
I spent four days on it, plus the video I made to go along with it, with me speaking every word. The video has no AI; it's all stock video and audio footage that I painstakingly stitched together in DaVinci Resolve.
I used AI to spell-check and fix my ESL grammar in the article. Initially I also generated a number of unnecessary AI images, which I later removed. I only left the ones that explain certain things, like the p2p model.
Yeah, I've been just slowly blocking all these domains, users, etc. But nowadays it's just unbearable. We have already lost this war.
And seeing this kind of crap every day at the top of the front page of the websites I used to love, with hundreds of comments from intelligent people not even noticing all this useless AI slop... Very sad future ahead.
Yes, these people are so unbelievably stupid that they think others more intelligent than them can't tell when they use AI to write their stuff. And then they act so annoyed when they get exposed... It's unbearable.
The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.
Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.
Wow, every single word in the original post and on that README.md is pure LLM. How sad.
In any case, this has been done at least since the very first public releases of Llama by Meta... It also works for image models. There are even a few ComfyUI nodes that let you pick layers to duplicate on the fly, so you can test as many as you want really quickly.
Fair point on the writing style, I used Claude extensively on this project, including drafting. The experiments and ideas are mine though.
On the prior art: you're right that layer duplication has been explored before. What I think is new here is the systematic sweep toolkit + validation on standard benchmarks (lm-eval BBH, GSM8K, MBPP) showing exactly which 3 layers matter for which model. The Devstral logical deduction result (0.22→0.76) was a surprise to me.
If there are ComfyUI nodes that do this for image models, I'd love links. The "cognitive modes" finding (different duplication patterns that lead to different capability profiles from the same weights) might be even more interesting for diffusion models.
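For anyone curious what a "systematic sweep" means mechanically, here's a minimal sketch of the core idea. The function names and list-of-layers representation are mine, not taken from the toolkit; real transformer layers would be `nn.Module` objects in a `ModuleList`, but the duplication logic is the same:

```python
from itertools import combinations

def duplicate_layers(layers, indices):
    """Return a new layer sequence with the selected layers repeated in place.
    E.g. indices={1} turns [L0, L1, L2] into [L0, L1, L1, L2].
    The same object is appended twice, so weights are shared, not copied."""
    out = []
    for i, layer in enumerate(layers):
        out.append(layer)
        if i in indices:
            out.append(layer)
    return out

def sweep(layers, k):
    """Enumerate every way of duplicating k distinct layers,
    yielding (chosen indices, modified layer sequence) pairs."""
    for idx in combinations(range(len(layers)), k):
        yield idx, duplicate_layers(layers, set(idx))
```

Each candidate sequence from `sweep` would then be loaded into the model and scored on a benchmark (e.g. via lm-eval) to find which duplication pattern helps.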
I tried out the one I linked with SD 1.5 today, moved the sliders around like a total noob, and got pretty bad results. I also found no way to "replay" individual layers like in the one you linked, so thanks for the link. Must take a lot of trial and error, haha. I'll check it out, assuming it works for the anima preview 2 too.
FWIW, there are several AI "vetting" bots out there, e.g. https://tropes.fyi/vetter. Not my thing; I bookmarked that one from an earlier HN thread. That particular tool graded the article harshly, placing it firmly in "AI Slop" (their term) territory.
Me: What are some of Maradona's most notable achievements in football?
Mercury 2 (first sentence only): Dieadona’s most notable football achievements include:
Notice the spelling "Dieadona" instead of "Maradona". Even any local 3B model can answer this question perfectly and instantly... Mercury 2 was incredibly slow and full of these kinds of unforgivable mistakes.
That one looks extremely fake... No one would mistake サーカス for "circuit", which is usually written サーキット and pronounced completely differently. Also, calling the sport and everything around it "the F1 circus" is very common in Japan and other parts of the world.