Hacker News: shevy-java's comments

This is actually a really great idea. There should also be universal terminals that people can access in public places, even without having a smartphone ready.

Now here in Germany we'll wait for decades for this to happen. For some reason Merz gave up on Germany.


But the responsible party is still the human who added the code. Not the tool that helped do so.

The practical concern of Linux developers regarding responsibility is not being able to ban the author; it's that the author should take ongoing care of their contribution.

That's not going to shield the Linux organization.

A DCO bearing a claim of original authorship (or assertion of other permitted use) isn't going to shield them entirely, but it can mitigate liability and damages.

Can it though? As far as I know this hasn’t been tested.

In a court case the responsible party could very well be the Linux Foundation, because this is a foreseeable consequence of allowing AI contributions. There’s no reasonable way for a human to make such a guarantee while using AI-generated code.

It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.

Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.

What law exactly are you suggesting needs to be changed? How is this any different from what already happens right now, today?

Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed, it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).

That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.

Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.


So it's a bit as if the Linux organization told its contributors: you can bring in infringing code, but you must agree you are liable for any infringement?

But if a lawsuit were later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability by telling its employees "You can break the law as long as you agree you are solely responsible for such illegal actions"?

It would seem to me that the employer would be liable if they "encourage" this way of working?


It’s a foreseeable outcome that humans might introduce copyrighted code into the kernel.

I think you’re looking for problems that don’t really exist here; you seem committed to an anti-AI stance where none is justified.


A human has to willingly violate the law for that to happen, though. There is no way for a human to use AI-generated code that doesn't carry a chance of producing copyrighted code. That's just expected.

If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause, explicitly saying they'll pay for any damages resulting from infringement lawsuits.


> Right now it's very easy not to infringe on copyrighted code if you write the code yourself.

Humans routinely produce code similar to or identical to existing copyrighted code without direct copying.


They don’t produce enough similar code to infringe frequently. And if they did, independent creation is an affirmative defense to copyright infringement that likely doesn’t apply to LLMs, since they have the demonstrated capability to produce code directly from their training set.

You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.

On independent creation: you are conflating the tool with the user. The defense applies to whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly, they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose independent creation defense because they have "demonstrated capability to produce code directly from" their memory.

LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case). Training set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory: we do not deny them independent creation defense wholesale because of that capability!

In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.


> You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.

Practically speaking humans do not produce code that would be found in court to be infringing without intent.

It is theoretically possible, but it is not something that a reasonable person would foresee as a potential consequence.

That’s the difference.

> LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case).

Exactly. It is a documented failure mode that you, as a user, have no capacity to mitigate, or even to know is happening.

Double standards are perfectly fine. LLMs are not conscious beings that deserve protection under the law.

>not settled.

What appears likely to be settled is that human authorship is required, so there’s no way an LLM could qualify for independent creation.


And that's not an infringement. Actual copying is the infringement, not having the same code. The most likely way to have the same code is by copying, but it's not the only way.

In this case, the "fall guy" is the person who actually introduced the code in question into the codebase.

They wouldn't be some patsy that is around just to take blame, but the actual responsible party for the issue.


Imagine you're a factory owner and you need a chemical delivered from across the country, but the chemical is dangerous: if the tanker truck drives faster than 50 miles per hour, it has a 0.001% chance per mile of exploding.

You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to, but if it explodes he accepts responsibility.

He does, and it explodes, killing 10 people. If the families of those 10 people have evidence that you created the conditions for the explosion in order to benefit your company, you're probably going to lose in civil court.
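To put rough numbers on the analogy (the 3,000-mile trip length is my assumption; the per-mile risk is the hypothetical figure above), the per-mile chances compound over the trip:

```python
# Hypothetical figures from the analogy: 0.001% chance of
# exploding per mile driven above 50 mph.
p_per_mile = 0.001 / 100

# Assumed cross-country distance (not stated in the analogy).
miles = 3000

# Probability of at least one explosion over the whole trip.
p_explode = 1 - (1 - p_per_mile) ** miles

print(f"{p_explode:.2%}")  # roughly 3% for the full trip
```

So even a "tiny" per-mile risk becomes a few-percent chance of disaster over one delivery, which is the kind of foreseeable harm the analogy is getting at.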

Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.


Cool analogy! Which has nothing to do with the topic at hand.

That is a nonsensical analogy on multiple levels, and doesn't even support your own argument.

Nice rebuttal.

Why would I put much effort into responding to a post like yours, which makes no sense and just shows that you don't understand what you're talking about?

Why would you put any effort into it at all?

Responsibility is an objective fact, not just some arbitrary social convention. What we can agree or disagree about is where it rests, but that's a matter of inference, an inference can be more or less correct. We might assign certain people certain responsibilities before the fact, but that's to charge them with the care of some good, not to blame them for things before they were charged with their care.

Because contributions to Linux are meticulously attributed to, and remain property of, their authors, those authors bear ultimate responsibility. If Fred Foobar sends patches to the kernel that, as it turns out, contain copyrighted code, then provided upstream maintainers did reasonable due diligence the court will go after Fred Foobar for damages, and quite likely demand that the kernel organization no longer distribute copies of the kernel with Fred's code in it.

Anyone distributing infringing material can be liable, and it’s unlikely that this technicality would actually shield anyone.

Anyone who thinks they have a strong infringement case isn’t going to stop at the guy who authored the code, they’re going to go after anyone with deep pockets with a good chance of winning.


> Anyone distributing infringing material can be liable

There is still the "mens rea" principle. If you distribute infringing material unknowingly, it would very likely not result in any penalties.


Copyright is strict liability. There’s no mens rea required.

But why should AI then be attributed if it is merely a tool that is used?

Having an honesty-based tag could be the only way to monitor impact, or to track down a fix in codebases if things go south.

That is, at the moment:

- Nobody knows for sure what agents might add and their long-term effects on codebases.

- It's at best unclear that AI content in a codebase can be reliably determined automatically.

- Even if it's not malicious, at least some of its contributions are likely to be deleterious and pass undetected by human review.


It makes sense to keep track of what model wrote what code, to look for patterns, behaviors, etc.

This is a good point, but I'd take it in the opposite direction from the implication: we should document which tools were used in general; it'd be a neat indicator of what people use.

AI tools can do the entire job, from finding the problem to implementing and testing a fix.

It's different from the regular single purpose static tools.


It isn't?

> AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).

They mention an Assisted-by tag, but that also contains stuff like "clang-tidy". Surely you're not interpreting that as people "attributing" the work to the linter?
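For context, the trailers under discussion sit at the end of a commit message. A sketch (the subject line, hypothetical LLM name, and developer are made up; only Assisted-by and Signed-off-by come from the thread) might look like:

```
foo: fix NULL pointer dereference in foo_init()

Assisted-by: clang-tidy
Assisted-by: SomeHypotheticalLLM
Signed-off-by: Jane Developer <jane@example.com>
```

The Assisted-by lines record tooling, whether a linter or an LLM, while Signed-off-by remains the human's DCO certification.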


Fork the kernel!

Humans for humans!

Don't let skynet win!!!


> Fork the kernel!

pre "clanker-linux".

I am more intrigued by the inevitable Linux distro that will refuse any code that has AI contributions in it.


Tardux Linux

I always wondered when Planet of the Apes would begin. We can see it now:

a) Chimpanzees going to war. b) Humans ending humans.

Both are presently in the making, if one looks at the geopolitical scale and at the damage caused by drones; a) is probably not yet full scale. Chimpanzees may be better diplomats than humans.


What's going on at Microsoft? Why did they suddenly declare war on VPN and related software projects?

Wouldn't comply with CIA backdoor requirements, but now they do ;)

> but I'm not sure the tradeoff is worth it.

Well, corporations decide on that. I abandoned rubygems.org when they added the 100,000-download limit; past that point I was no longer able to remove old gems. Then came the new corporate rules for rubygems.org and the mass firing of about 8 open-source developers who were involved with the Ruby ecosystem.

We simply need to accept that corporations controlling an ecosystem can lead to HUGE problems. We need an alternative here. I don't have a good alternative to suggest either - money is influential. People adjust their behaviour, and how they think, with regard to money all the time. We may need some kind of model that also handles the economy. And, again - I have absolutely no clue what that could or should look like.


We need to create a special-interest org for people who support general computing. I'm open to being part of something like this. [0] Reach out to me if interested.

[0]: https://scottRlarson.com


Well, Microsoft is evil so no surprise - but this seems like targeted censorship:

"The list of affected projects includes, but is not limited to, Virtual Private Network (VPN) software WireGuard, on-the-fly encryption (OTFE) utility VeraCrypt, the MemTest86 Random Access Memory (RAM) testing and diagnosis tool, and the Windscribe VPN software."

It seems to go against VPNs, right? Is there a connection to the other things, such as the mem-test tool? That one is the only one that does not fit here. Or perhaps we don't have the full picture.


It seems to go against developers of Windows drivers (which includes VPNs) - apparently there was a “mandatory account verification for all partners in the Windows Hardware Program who have not completed account verification since April 2024”, but for some reason it looks like no one notified these guys that they have to verify their accounts.

I wonder if they were compelled by someone in the government.

I believe this is preemption in the US for what's coming. Given the states trying to ram in "age verification" (mass-surveillance propaganda, the same agenda as CSAM scanning), I have no doubt that the only VPNs the USG wants people to have access to are corporate ones (an easy entry point) and pwn'd VPNs [0] (in the media lately).

Fuck Microsoft (aka Microslop).

[0] https://www.wired.com/story/using-a-vpn-may-subject-you-to-n...


Awww. Oldschool bboy music in the 1990s and before.

Strange. I switched to Linux 25+ years ago. My setup became quite minimal; right now I use IceWM for the most part. GNOME3 was always useless; KDE also changed since Nate "I need more moneys!" took over (see his donation daemon, or the more recent "systemd-only" tied with wayland-only garbage that KDE succumbed to).

Linux is good in that you can combine things that work, so it is more flexible than Windows. But desktop-wise I don't see it becoming really dominant; GTK is now a GNOME-only toolkit. Qt is too busy focusing on its own business model. Desktop Linux is not useless, but it is really just sub-par compared to Windows. I also use Win10 on a second computer; I don't like it but I use it for testing. Linux lacks decision-making power and focus (and corporations such as IBM/Red Hat are selfish, so these will never reach any "breakthrough" like the infamous Year of the Linux Desktop, which I heard will come next year together with GNU Hurd ... I think).


> Desktop Linux is not useless, but it is really just sub-par compared to Windows.

Each to their own. My experience is the opposite (I use KDE). I have to use Windows at work and it's always such a pain. At least Windows 10/11 finally has multiple workspaces natively and some keyboard shortcuts for managing windows (ironic), but I would have preferred to stay on Windows 10.

Now Windows doesn't even support proper suspend anymore, and it won't stay in "modern standby" either. Constantly waking up and doing god knows what, with fans screaming. When I take a look at what it's doing, Task Manager claims that nothing resource-intensive is going on. I'm guessing it's hiding some internal processes. It calms down when I put it to sleep again. Sorry for the rant, I'd better stop before I start.


Yes, the flaky sleep is what did it for me - the laptop would randomly boot up at 2am, bright lights and whirring fans. I thought it was a virus! Fedora seems to have cracked the hibernate/sleep issue, possibly due to good Intel driver support for my Dell, and finally Linux has better hibernate, sleep, and wake than Windows 11 (YMMV!).

I have actually been lucky: even my laptop from 15 years ago already worked well with Linux and suspend while Windows didn't (it wasn't OEM Windows anymore). I have also had multiple desktops that have _mostly_ had no issues with suspend; only nvidia has given me grief on some setups, where sometimes the screen would be blank on waking up, but I figured out workarounds for that.
