Yes, of course some people will feel that way, but less friction isn't always a good thing either: it lowers the time investment needed to get started, which makes users more likely to just churn. It's a balance.
> There isn't much difference between giving this data to 20,000 researchers all over the world and simply publishing the data on the web.
As a researcher who regularly deals with such data, there is a MASSIVE difference. Yes, I have access to the data, but I am restricted in how it can be stored (no cloud) and in what I can and can't do with it, and for some of it I'm even mandated to destroy it once the research project is over. I have the informed consent of every participant, some of whom withdrew halfway through the collection without any penalty to them. I also don't need a new law because I'm already bound by existing ones, by the contract I signed when I joined, and by the confidentiality agreement I signed when the project started. While I don't know whether the leaker(s) will be identified, the existence of the data itself already calls for legal action while giving a starting point for the investigation.
Your suggestion, on the other hand, seems to be "let's put this data out there without people's consent and make companies pinky promise that they won't use it in their black boxes in a way that's virtually impossible to detect or prosecute". Those two things are definitely not equivalent.
I am not arguing either way, but I think you missed the point.
When you give data to O(20000) people, there's a high probability it leaks anyway: if each recipient independently has even a 1-in-10,000 chance of leaking it (whether from not following the rules, or just the accident/attack surface area), the probability of at least one leak is 1 - 0.9999^20000 ≈ 86%.
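A quick back-of-the-envelope sketch of that number (the 0.01% per-recipient leak rate and the independence of recipients are purely illustrative assumptions):

    # P(at least one leak) among n independent recipients, each leaking with prob p
    n = 20_000
    p = 1e-4                      # assumed: 0.01% chance per recipient
    p_leak = 1 - (1 - p) ** n     # complement of "nobody leaks"
    print(f"{p_leak:.1%}")        # ~86.5%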
The console knows all that, but does the game know all of that too? I'm not a console developer but perhaps the game doesn't have permissions to know which devices are on, only which devices are sending key presses right now.
> There seems to be some implicit feeling that everything ought to be getting better and cheaper than it used to be.
But so many things did become cheaper and better: computers, availability and quality [1] of the music I can physically buy, the energy efficiency of modern fridges, the speed and safety of modern cars. Even my milk lasts impossibly long without spoiling.
If the replacement laptop battery I can buy today for ~$50 is leagues ahead of anything available in the 70s, then why aren't jeans and backpacks also miles ahead of what was available back then? No wonder the younger crowd is confused.
[1] Yes, CDs are objectively better than vinyl. Whether the audio mastering has kept up is a different topic.
I uploaded a picture of me from Halloween wearing a katana. It classified me as asexual, atheist, interested in crime, vandalism, and with a racial bias against immigrants. It also suggests that I should be offered ads for black market weapon dealers (Silk Road) and/or an arsonist starter kit (Amazon, surprisingly).
If you're looking forward to attracting the attention of automated police systems then now you know how.
A lot of times as a citizen you feel that something is "off" with different government jobs but can't put your finger on what exactly.
And then you watch one of those reports and go "holy duck, how can it be this bad, and what are they doing to my people and with my taxes?"
Different country, but a lot of times when dealing with the government I'd wonder why the people working there are always grumpy. Then one of them gave me the "tour" of what they have to deal with that's hidden from the public eye.
Working toilets? Nah, they had to go outside and around the building to Porta Potties.
He showed me like fifty places in the building with mold. Not the fun white kind you get on cheese; I'm talking about black fungus out of Stranger Things eating half the wall. Some offices had signs saying "working in a different office today", with the printed date reading 1998. Inside, water was dripping from the ceiling.
He was like, "that's why we're grumpy." Ever since, I bring a piece of cake and some hot coffee when I have to deal with government employees and thank them for their service. They are allowed to be grumpy, working under conditions I would expect from a third-world country.
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Yeah, a company causing mass death or other disasters is maybe the single clearest signal that it should go bankrupt and someone else should take over (if the tech really is that important).
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know the brakes are defective. And if ex-Monsanto has to pay millions in compensation for causing cancer with a product they tested to hell and back, then I don't see how it's any different when the thing causing cancer is an AI, just because the developers pinky swear that it's safe.
The headline is completely false and misleading. The bill does not indemnify AI companies against all mass murder, as the headline implies. It indemnifies them only if they UNKNOWINGLY provide a product that is used by others for mass murder.
If someone asks ChatGPT for places in a city where a lot of people will be around, intending mass murder but not revealing it, do you want OpenAI to be liable? That seems absolutely crazy.
All of those are false equivalences. Let me give you a few better analogies.
Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.
Or a big tech company like Microsoft selling software for planning a mass murder, including indoctrination material and checklists of things to be done.
Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit-and-run campaigns.
Now let's consider a few relevant examples.
An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.
Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.
Or selling a financial trading AI that's known to make disastrous decisions at times.
Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.
I know that I'm supposed to assume good intentions and not make accusations on HN. So let me make this rather obvious observation instead: some people here are dismal failures at making arguments that are consistent and free of logical fallacies, especially when it comes to questionable practices by big tech.
I didn't name any single AI. But who is providing the AI used by the Pentagon and Israel to plan the mass killings in Iran and Palestine respectively? I'm surprised that people can't see the obvious danger.
People championing the absolution of billionaires who create a chatbot that can't spell "strawberry" and then say it should be allowed to choose who lives and dies wasn't what I expected at the turn of the decade.
This can only be an intentional misreading of the bill, or you haven't read the underlying bill at all, because the headline is patently false. It indemnifies them ONLY if they unknowingly assist in mass murder.
If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.
In a world with broad liability as you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder should be held liable for aiding him, despite knowing nothing of his intentions.
Half of these people have financial interests in the companies in question, either directly (working for them) or indirectly, or they're already part of that class. Realize who's behind the keyboard, and there's nothing surprising about it.
By the time a train is delayed enough to be canceled the mandatory compensation applies anyway, and I'm not sure how much DB cares about bad press.
I can see the cancellations as a means of stopping a cascade of delays, but it's also true that doing so means the train won't count in the delay statistics for the remaining stops. If DB doesn't want people to accuse them of gaming the statistics, perhaps they should calculate said statistics in a way that doesn't directly benefit them when they inconvenience their delayed passengers even more?
> One day after this piece went up, Chaotic Good made significant changes to their website — including pulling the “Narrative Campaign” section completely.
I checked the Internet Archive but I cannot access any of the archived versions. Apparently the website uses JS to display its content and the IA can't deal with it. Internet searches show that the page existed, though, so I'll take the content deletion as proof.