Hacker News | notJim's comments

This was the same logic that was used when building nuclear weapons, and many of the scientists involved in that tried to find a different path (most notably Niels Bohr). I think we would be in a much better world if they had been successful. It's good that we're trying again w/ LLMs.

They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now?

/s


Could also be great for maintenance dosing. I'm reaching the end of my ~weight loss journey~, and it's not a sure thing that the insurance company will continue paying for the injections once I'm no longer overweight.

I'm definitely willing to keep taking it. If insurance won't pay, I could afford the pill out of pocket if I had to, whereas that would be cost prohibitive for the injections.


Every artist and creator of anything learned by engaging with other people's work. I see training AI as basically the same thing. Instead of training an organic mind, it's just training a neural network. If it reproduces works that are too similar to the original, that's obviously an issue, but that's the same as human artists.


Human beings are human beings.

For-profit products are for-profit products, and they are required to compensate rights holders if they are derivative of other works (in this case, there would be no AI product without the upstream training data, which makes it derivative by definition).

If you would like to change the laws, ok. But simply breaking them and saying 'but the machine is like a person' is still... just breaking the laws and stealing.


This is a bad-faith argument, but even if I were to indulge it: human artists can and do get sued for mimicking the works of others for profit, which is precisely what AI does. Secondly, many of the works in question have explicit copyright terms that prohibit derivative works. These companies have built a multi-billion dollar industry on theft at scale. I don't see a more charitable interpretation.


You can't call something a bad-faith argument just because you disagree with it. I mean, you can, but it's not at all convincing.

As I said, if AI companies reproduce copyrighted works, they should be sued, just like a human artist would be. I haven't experienced that in my interactions with LLMs, but I've never really tried to achieve that result either. I don't really pirate anymore, but torrents are a much easier and cheaper way to do copyright infringement than using an AI tool.


LLMs don’t have to be able to mimic things. And go ahead and sue OpenAI and Anthropic! It won’t bother me at all. Fleece those guys. Take their money. It won’t stop LLMs, even if we bankrupted OpenAI and Anthropic.


> I see training AI as basically the same thing

Of course you do.


The navy comment is a bit unfair, as it's well-known that Amazon is more of an airpower (hence "the cloud" etc.)


You think the SF parking enforcement agency is tracking everyone?! That's one of the wilder conspiracy theories I've heard recently.


Well, it's not the first time Germany has had a problem with Marx.


Marx had plenty of problems with Germany too - or specifically Prussia... In Critique of the Gotha Programme, for example, he harangued what is now the SPD for arguing for free, state-provided education, on the basis that it was rather the Prussian state that was in severe need of being educated by the people.

(He favoured a US-inspired model of public licensing, but privately run schools instead)


We are much greener though, at least in the West. Carbon emissions peaked in Europe and North America in the last few decades (earlier in Europe). In Europe, forests are growing back, because marginal agricultural land is being returned to forest as yields rise on prime land. I think this is beginning to happen in the US as well.

This doesn't mean climate change isn't a problem; even with this progress, we're way behind and not moving nearly fast enough. But often it's the green side of the spectrum that's lying by catastrophizing: understating progress while overstating the severity of what's happening.

It's happening similarly with AI, where the green movement has decided that AI is unacceptable, even though it has a tiny ecological footprint compared to activities like watching Netflix or eating nuts, let alone eating beef or flying on a plane.


That's right. So essentially we are in a deadlock where every side says "I'm only contributing fractionally to the problem," and nobody on Earth really has the capability to block the activities you described from happening, especially not when there is good money to be made (e.g., coal mining vs AI vs raising cows).

Doesn't seem like a bright future, but at least AI has a chance of solving the problem while contributing to it. No other activity can really claim the same.


You're still missing it: we are not in a deadlock. Developed countries are in fact decarbonizing. China is decarbonizing too - they're behind where the West is, but their goal is to peak their emissions by 2030.

In fact, it's kind of the opposite of what you say—everyone is contributing fractionally to the solution. This is what climate doomers miss.


Ok, so your opinion is that in X number of years, we may well hit some new level of decarbonization where we have severely contained or reversed the effects of climate change and so on, thanks to a relatively decentralized cooperation between all countries, even historical bad actors.

My position is that that is all theatre, that even if we do achieve that it will be temporary (nth industrial revolution, nuclear war, etc), and that we will eventually be the cause of our own worldwide collapse-- all while thinking we have control to the very end.


I mean, if you're that fatalistic, why worry about AI (or even climate change) in particular? If we're all just doomed no matter what, you may as well just enjoy what you can from life and not stress too much about any particular development.


Oh right change my mindset from despair to joy. Great move, let me just flick the switch.

Why feel sad when can feel happy. Me dummy.


China is not decarbonising. Its emissions are rocketing up.


Is the issue that the suggestions from the AI tool are not good, or is that bad code is making it into the repo?

The latter problem should be prevented by code review (first by the developer using the AI tool, then by their teammates on a PR). Code generated by AI should be reviewed no differently than code written by a human. If you wouldn't approve the PR when a person wrote the code, why would you approve it because an LLM wrote it? If your PR process is not catching these issues, you have a PR process problem, not an AI problem.

As for the former problem, I have no idea.


Pretty troubling!