> It seems like NGOs and IGOs have been pushing for internet restrictions for a long time. There has suddenly been a push for age restrictions allegedly because of abuse material. This happens annually.
we’re seeing some good evidence the most recent pushes were secretly funded and directly written by meta, the corporation. [0][1]
according to the link in there,
> Rep. Kim Carver (R-Bossier City), the sponsor of Louisiana's HB-570, publicly confirmed that a Meta lobbyist brought the legislative language directly to her.
and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
corporations openai, meta, and google were absolutely backing the push for the age verification bill in california and ohio. [2][3][4]
Reading the original research and stripping away the motives implied by the bot, the data aligns with another interpretation: namely, that Meta is going with the flow and using the opportunity to push for regulation that impacts its interests less while affecting its competitors more.
The original research is riddled with baked-in conclusions and has not been verified independently. It's also mostly LLM generated.
> and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
The original report that cited the $2 billion number was AI generated slop. The $2 billion number wasn't from Meta, it was from Arabella Advisors.
The AI-generated report showed only about $20-30 million in lobbying efforts per year across all lobbying.
Even the Show HN post was full of AI slop, claiming things like "months of research" when the Claude-generated report showed it began a couple days prior.
So please stop repeating this AI generated junk. It dilutes any real story and the obvious falsehoods make it easy for critics to dismiss.
That’s on all lobbying efforts combined. It’s not out of line for a company of that scale that is trying to do things like build data centers and other such activities.
There’s a motte-and-bailey fallacy happening with that “Meta spent $2 billion” report: the $2 billion number is used as a hook, then swapped for a different argument if the other parties are observant enough to see that it’s BS.
because we typically want to know the writer of a piece. we want to know where to lay credit.
every book you buy has an author credited. articles in newspapers and magazines have photographer and author attributions.
asking an ai to write you a story does not make you an author. if you ask someone to take a photo for you, you don’t magically get to say “look at this photograph, i’m a photographer.” if you ask someone to bake you a wedding cake, and then claim you baked it, you’re a fraud.
Because you need to do some pre-filtering on where to focus your attention, and you want to make sure the author put some thought into the article without having to analyze it.
Due to LLMs making the cost of publishing “thoughts” extremely low, there’s now an over-supply of content that looks decent on the surface, but in reality the author has probably spent less time on it than the reader.
Are we really so far down into the LLM denial mindset that we consider an author spending multiple months crafting this to be "worthless" and less investment than your casual reading?
No, I believe this is a great post. It’s awesome. Even more so because it’s AI generated, as it shows what AI can do when given a lot of quality material to work with.
I’m just talking about the general topic about the usefulness of an “this is AI generated” classifier.
> general topic about the usefulness of an “this is AI generated” classifier.
exactly what i'm trying to get at too. And my thesis is that this classification method is pointless - it's just as pointless as saying things like "this author went to harvard", or "he/she came from a poor background".
Don't we already have these filters in place? I only saw this because it was highly-upvoted on HN, for example - I don't read every new submission. I also read things sent by friends and family, shared by curators I trust, etc.
Of course these systems may eventually break down, but for now they seem to work.
why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
we have pop musicians who produce massive hits under their names, and the songwriters are still given credit in liner notes and in the track details on spotify or wherever.
if it’s created by a bot, i’d take it even further and say which version of which model actually generated it should be declared. why would anyone be against giving proper attribution?
We like writing because the fact that we can create good writing says something about ourselves. If AI can create writing that surpasses, say, a Tolstoy or George Eliot, that will fundamentally change our self-perception. Is that a good thing or bad thing? Well, let's first cross the bridge of an LLM writing War & Peace and see how we feel.
If someone couldn't be bothered to write it, I certainly can't be bothered to read it. I did not bother to read the article involved because the continual piss stain on the images, the website itself, and a few key phrases tipped me off to the fact that it was all generated.
When you interact with art, you do so to interact with the author and the point they want to make. Writing is something where a skilled writer will be able to make a point tersely and have it stick, knowing where to embellish and where to keep it simple. Every decision in art tells you about the artist. Generative AI may be able to fake the composition process, but the point of composition is it reveals something about the human. All of those are artistic decisions that a machine apparently now "can do", but not with any coherency.
The holder of the reins of slop is not an artist; this is plain to see because they do not interact or engage with their work on the same level as an artist. The produced slop is not art, because it cannot be engaged with on the same level.
Imagine if you had an auto cake making machine that decides on its own the best time to make cake. It adds the ingredients, stirs, turns the oven on, and leaves the finished cake on the counter for you.
People start opening bakeries consisting entirely of cakes baked by the automatic machines. The owners of these machines have no idea whether the cakes have a bit too much flour or were slightly over-stirred. In some cases, they haven't even tried the cakes.
Who gets to claim they made the cake?
By contrast, there are others who carefully tune their machines to make sure everything is perfect. They adjust the mixing settings and ingredient proportions. They experiment and iterate. They taste test throughout the process. And what they give to the public tastes every bit as good as a homemade cake.
The first group is creating slop. The second group, I think, is baking. And OP is in the second group.
Replace "oven" with a dish washer or a washing machine for your clothes. Those things do exactly all of this. Yet we still complain about washing clothes and doing the dishes, even though it is far less effort than anything our parents did, or their parents before them.
If you commission a baker, another person, with wants and desires of their own, is involved.
If you use an AI, there isn't.
Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.
> If you commission a baker, another person, with wants and desires of their own, is involved.
> If you use an AI, there isn't.
What is the functional difference here? You are commissioning (see: prompting) someone (see: an AI) for a piece of work, or artwork, or whatever. The output is out of your control, and I don't think the existence or lack thereof of a human on the other end materially matters.
If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.
I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".
I'm asking you to explain what the actual difference is and you're avoiding the question.
If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?
Why would I give him the same credit I would give a writer.
Or why would I give a writer the same credit I would give someone who created the AI prompts and scaffolding to generate this?
Being unhappy about not being able to call oneself an author ends up betraying a lack of confidence in the work or process.
In the end writer, dancer, actor, whatever - these titles come from their impact.
There will be a different name for this, and eventually there will be something made that is good enough that people will be spellbound. At which point it's going to be named something else.
Ironically, the story can be read as gesturing in that direction, as it's ostensibly about giving a new title to a particular job.
In general, though, I think part of the mistake people keep making is that they try to imitate what would be value to engage with if a human wrote it, in an attempt to claim the role of an author of a book or whatever. There's likely artforms that are unique to what an LLM can facilitate, but trying to imitate human artforms is going to give you stunted results. The AI is very good at imitating the form but not the substance.
Once we stop trying to generate and pass off AI essays, novels, choose your own adventure stories, and all the other human genres as being human writing, we'll have a chance to figure out actually interesting artistic forms.
> Creating something without the effort previous works involved, can and do affect the context and understanding of it
not really. Unless you place value on _effort_ rather than being objectively outcome-based. Someone digging a hole with a spoon doesn't make it a better hole than one dug with a jackhammer.
I maintain that the work itself - that is, the contents of what is being expressed - is the sole judge of how good the work is. Not the authorship, LLM-usage or otherwise.
The context exists whether it's LLM generated or not, because the context sits broadly in society, culture, and manifests in the mind of the reader.
> how would LLMs fair when the content of the work itself is about “Something made by a human”.
it would fare just as well as if the same words had been written by a human, provided the contents are sound and have good meaning - conversely, slop is slop, regardless of whether it was written by an LLM or a human.
My point at the grandparent post is that there's a lot of blind discrimination on the origin of a work - if it was written by or with the help of an LLM, then it automatically deserves less attention, and/or its content's worth is diminished. All without actually discussing the content.
Largely, I agree with you. One famous counterpoint about labeling works of art with the author: The Economist (the magazine) does not add the author to most of their articles.
> because we typically want to know the writer of a piece. we want to know where to lay credit.
Does the average person really care all the time? Maybe about the outlet it comes from as a whole (factuality, political lean), but more rarely the exact author. Many don’t even have the critical skills for any of it and consume whatever content is chosen for them by whatever algorithm is there. We probably should care, I just don’t think a lot of us do.
For me, needing to know that something’s written by AI serves threefold purposes:
1) acknowledging that it might be slop that someone threw together with no effort (important in regards to spam)
2) acknowledging that depending on the model the factuality might be low when it comes to anything niche (though people are wrong too, often enough)
3) mentally preparing myself for AI bullshit slop language, like “It’s not X, it’s Y.”, or just choose not to engage with it (it's the same disgust reaction as when I find a PDF and realize it's just scanned images, not proper text)
In general, unless the goal is either human interaction or a somewhat rare case of wanting to read a specific blog etc., most of the time I don’t categorically care whether something was lovingly created by a human or shoved out by a half baked version of Skynet - only that it’s good enough for whatever metrics I want to evaluate it by. I’m not ashamed of it and maybe that’s why I don’t take an issue with AI generated code either, as long as it’s good enough (sometimes better than what people write, other times quite shit when the models and harnesses are bad).
In Peter Watts's Blindsight, the aliens understand language as spam, a hostile intent to waste their time, and respond by opening fire.
Reading LLM slop without warning makes me see their point of view.
I think there's useful ways to engage with LLM writing, but they are often very different than human writing.
A human writer, a good one, often has ideas that are denser than the words on the page, and close reading is rewarded by helping you unpack the many implications.
With AI writing, there are usually fewer ideas than words, and so it requires a different kind of engagement. Either the human prompter behind it didn't supply enough ideas, or they were noncommittal enough that their very indecision got baked in.
LLMs are very prone to hedging and circling around a point while not saying much of anything. Maybe it is the easiest way to respond to RLHF incentives and corporate-speak training data. Or maybe they're just intrinsically stuck on being unable to find the right next token so they just endlessly spiral around via all of the wrong ones. Either way, there's often a whole lot of cotton candy text that dissolves when you try to look at it more closely.
can't reply to your comment below so i will comment here
> why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
clearly it does to you?
thing is, this is a fool's errand to try to police what people credit when there is zero capability of verification and enforcement
the current social norms still value authorship, so people will just take or omit credit as they see most advantageous, even if it's merely an ego advantage, which it typically is but a proxy for brand building
what will happen if/when the currency of attribution is completely altered? hard to predict
my prediction is that track record will be considerably more important, not less, but human merit will be increasingly seen as irrelevant
> I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
if you find this distressing then i imagine you find it equally distressing when a couple of corporations destroy something.
the reason the word “enshittification” has become so ubiquitous is because corporations are actively destroying the internet and desperately trying to convince us the internet is separate from “the real world”.
sometimes stopping a person from burning the house down is necessary. no matter how loudly they cry about their freedom to have a bonfire in the living room.
how would a company respond if you had a bot do your job interview in your place? or do your rent applications?
they wouldn’t accept it.
growing up, my first job as a teenager was at a restaurant that had ridiculous uniforms. i lasted about two months. i realized it irritated me that the owner would hang out at the restaurant in street clothes but expected us to look like little dancing monkeys. i quit and never worked another job where the owner asked us to do things they would never lower themselves to do.
i understand on the surface it sounds petty, but it has proven to be a fairly strong indicator of how employees are treated.
if the people in power look at those who make them money as less than, if those in power expect others to jump through hoops they wouldn’t do themselves, it’s time to seriously reevaluate the situation.
i can’t speak for the journalists who wrote the story, but i assume it’s due to how prominently proton markets their email as safe/private/encrypted and then it turns out they may be sharing data with the swiss government who then gives it to the us government.
it absolutely should be news when a company that heavily promoted itself to normies as safe, encrypted, and private is sharing customers’ data, which is ending up in the hands of authoritarian foreign governments who are hunting for protesters.
This is a highly deceptive title. As if Proton proactively helped the FBI, which is not even close to the truth. Proton is not even in direct contact with the FBI. It's the Swiss government that forwarded the info to the FBI.
A much better title would be:
Proton Mail Payment Info Helped FBI Unmask Anonymous 'Stop Cop City' Protester
Or
FBI Unmasked Anonymous 'Stop Cop City' Protester
via Proton Mail Payment Info
The point is informing the normies that your payment info is linked to your identity and a potential risk to your anonymity.
That clickbaity title makes me want to unsubscribe from their RSS feed.
> then it turns out they may be sharing data with the swiss government who then gives it to the us government.
Literally every legal business complies with law enforcement. They have to.
don’t worry, normal people will never ever see this data to use it against the powerful.
our public data can only be seen by billionaires and cops, not us.
it can be used against us, but never the other way around. the faster we realize this, the faster we can move out of our “divisive” phase and get back to making billionaires dreams come true.
Indeed - but as compute costs come down, it doesn't have to be that way - ultimately there are 8 billion eyes out there.
There are also public data examples - for example the public data on charter flights or ship locations had people like Elon Musk bleating about privacy.
many people i know personally still deny, to this day, that covid was real, even though they personally knew people who died or were hospitalized and ventilated.
one of my family members who was in a coma for over a month and in the hospital for months still denies it was covid despite multiple doctors telling him otherwise. some people live in a very real state of denial entirely separated from reality.
sadly i’m not sure the person you replied to is too far off.
Same here. The extreme politicization of the disease, plus the social isolation, plus over reliance on inflammatory social media as one's only channel to the outside world, fully broke some people's grip on reality. Permanently for some.
this doesn’t seem like a safe direction either.