> Commercial entities that host "a substantial portion of material harmful to minors" are now prohibited from facilitating or encouraging the use of a VPN to bypass age checks.
LLMs often have a distinct writing style. It's not guaranteed; you can get false positives and false negatives, but if you start paying attention it becomes obvious in many cases.
But you can tell it to use different styles. To be formal or informal, to insert colloquialisms or to remove them.
People are depending on their own 'gut sense' a lot, and not realizing how often it's wrong.
If you think all it takes is paying attention, then you are missing it. It's both more widely used than assumed, and also now obscuring what is non-AI.
> But you can tell it to use different styles. To be formal or informal, to insert colloquialisms or to remove them.
And when you get it right, the result doesn't get called AI generated.
> People are depending on their own 'gut sense' a lot, and not realizing how often it's wrong.
TFA is very obvious about it.
A human who writes like this should be ashamed to do so, and should endeavour to understand why the writing comes across as "generic LLM"-like and fix it.
We have reached a point where people can end up training their writing on generic LLM output. This is a bad thing, because it's bad output.
Even beyond any clues from writing style, the general presentation is bad. It presents far too many facts and figures without giving anyone a good reason to care about most of them. And then it ends with a section on a separate topic (how to choose a lab, rather than how they're distributed across the world).
Most importantly, though, the submission is presented with a different title, implying a purpose that is never elaborated in the article itself. I would have expected personal insight a) on why people should care about the FCC's action (there is no mention of that action at all); b) on what the process of collecting this data was like. And I would have expected, you know, an actual map of the lab locations rather than bar charts giving geographic breakdowns.
That obviously won't be true for much longer, assuming it's still true now, which I doubt. If you're an LLM content farmer, how hard could it possibly be to LoRA your way out of generating clichés like em-dashes, 'You're absolutely right!' and 'It's not A, but B' rhetoric?
Don't beat yourself up over it. It's the new sport for HN upvote farmers to default to calling out any TLDR post that has "delve" in it, or some other cliché, as LLM Slop. I also think it's a waste of time. What's important is the content. Is the content of the article valuable? No? Just close it and move on. But we know the incentive of a few upvotes is just much too good to pass up...
This article goes ham on the rule of threes, does the "not just x, but y" cliché, uses em-dashes with spaces on either side, and has bold heading-sentence paragraphs; it visibly bears the hallmarks of AI-driven writing.
If you personally can't tell then just say that rather than casting aspersions on everyone else by claiming they can't.
No human* would waste the time to write a piece that is both highly polished and so long that any useful information is spread so thinly it is essentially empty. This is how people "can tell" if it is written by AI.
Not a dig at this author by the way or saying it applies to this post, just in general.
*or if they did anyway, the result is the same: bad writing.
> a piece that is both long and highly polished while being devoid of useful information
Idk, I learned a little bit about our regulatory system, that a lot of these labs are in China and that those are now banned (and that the ones in India may be next).
The style is admittedly annoying. But I'm glad the author put in the work to highlight something they, and now I through them, found interesting.
> No human would waste the time to write a piece that is both highly polished and so long that any useful information is spread so thinly it is essentially empty.
LOL, some of us spent 12 years in public schools refining this very art to perfection.
From the article: "Where the next Programming Language will come from?", which beautifully described the sad state of things. His main point is that the incentives for programming language innovation are at best misaligned and at worst non-existent.
Ok. Zig is great. But won't it still suffer from the same headwinds as every other 'better' language, i.e. that industry won't adopt it? They have too much installed base and just want to hire Java/C#/etc...
We'd like to think this could turn into the voice interface on Star Trek.
But
It can go the other way too: 'incantations', 'spell books', speaking to the void to produce magic.
"The CFO donned the purple robes, spoke the spell of Increased Productivity, and then waved his hands, symbolizing the reduction in workforce labor. And behold, the new ERP/SAP App was produced from the void. But it was corrupted by dark magic, and the ERP/SAP App swallowed him and he was digested. The workforce that remained rejoiced and danced."
Doesn't this still presume that we understand our own consciousness, in order to make the comparison?
Where does our survival instinct come from? And why couldn't AI have one?
Also, reproduction.
Humans are basically just food, sex, survival. And consciousness is just a rule set for fulfilling those goals. So if a NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?
Just wondering: once an 'AI model of some form' is in a physical body, a 'robot', and is provided with some rules about survival so it doesn't fall into a hole, then after a series of these events, does it matter? Does mimicry become reality, or is it no longer differentiable?
Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know that the internal state of the 'real' one is different from the 'mimicked' one?
The paper isn't concerned specifically with survival. It's saying that you cannot achieve "abstraction" (presumably the structure that underlies critical thinking, creativity, etc.) through sheer mimicry.
Again, just echoing the paper here. I don't know that I'm doing it justice.
If AI has a survival instinct, then we should theoretically see evidence of it if we construct the right environment for AI to express it. Animals and cellular organisms demonstrate a survival instinct under the right conditions, so we would have to find equivalent conditions for a hypothetical machine intelligence.
Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?
I can make an AI system with a survival instinct right now. Of course, all that will do is make people tell me “it’s not a proper survival instinct” or move the goal posts and tell me I need yet some other property.
This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.
Consciousness is not a concept that can be rendered operational.
I can make a state machine that acts like it has a survival instinct. But it certainly isn't something we would consider conscious. So I am not exactly sure how good most tests are.
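To make the point concrete, here's a toy sketch of what such a state machine might look like. The states, thresholds, and energy bookkeeping are all invented for illustration; the point is only that "flee threats, seek food" behavior takes a few lines and clearly implies nothing about consciousness.

```python
# Toy finite state machine exhibiting "survival-like" behavior:
# it flees threats and eats when energy runs low. Purely illustrative;
# all states and thresholds are made up for this sketch.

class SurvivalFSM:
    def __init__(self):
        self.state = "idle"
        self.energy = 10

    def step(self, threat_nearby: bool, food_nearby: bool) -> str:
        if threat_nearby:
            # Highest priority: avoid immediate threats.
            self.state = "flee"
        elif self.energy < 5 and food_nearby:
            # Replenish energy when it runs low.
            self.state = "eat"
            self.energy += 3
        else:
            self.state = "idle"
            self.energy -= 1  # idling still costs energy
        return self.state


fsm = SurvivalFSM()
print(fsm.step(threat_nearby=True, food_nearby=False))   # flee
print(fsm.step(threat_nearby=False, food_nearby=False))  # idle
```

Run it long enough near food and it will "choose" to eat when hungry, which is exactly the kind of behavior that looks like a survival instinct from the outside.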
There are plenty of people who say AI has already displayed a survival instinct, by threatening users who talk about shutting it down, or by using markets or blackmail to get funds to source an external machine to run on.
There are a bunch of articles proclaiming AI is trying to break out, but I can't find a real study on it.
Asking humans to discuss consciousness is like asking Super Mario to discuss screen pixels. We have no freaking idea. Everyone on all sides, physicalists, idealists, and everything in between are all full of it.
Indeed interesting. Seems that his theories are a particular strain of idealism. I probably lean more towards idealism than physicalism but I don't think it's the whole picture. It's still missing something.
Sorry, maybe I should have quoted the next line as well:
> Pabst echoes that advice: “My recommendation for people at home, without knowing anything they are doing, 90% chance that if you use less coffee and grind a little coarser [your coffee] will actually taste better.”
So it's not just about consistency, but also quality.
"Taste better" doesn't mean quality either. What do I know about their tastes? They're scientists, not baristas (in the article, baristas were only asked about process options). Also, they didn't discover anything new, just confirmed what everybody was telling them. And not least, there are different methods of making coffee, yet they smeared their espresso-machine results over everything: to make Turkish coffee (in a pot), for instance, you must grind it the finest and use more.
Reproducibility is necessary but not sufficient for consistently good coffee. If you can't reproduce what you did, you aren't able to make changes to improve over time.
This is why I think the Aiden is underrated. It's way more consistent than I was when doing pour-over, but it still lets me tweak variables.
Good is totally up to the person's tastes, anyway. Turbo style shots are the end-all-be-all for a lot of people who enjoy espresso. For other people, they hate it, for a multitude of reasons.
A pet peeve of mine is when people mention "weak" coffee. What does this mean?
So if I have a joe-blow web site, and a user uses a VPN, how am I supposed to do anything about it? And why should I?