> That's why every chat box has that "Regenerate" button.
Wait, are you doing this in the web chat interface?!
That's definitely not a good way to do it. You need to be using a harness (like Claude Code) where the agent can plan its work, explore the codebase, execute code, run tests, etc. With this sort of setup, your prompts can be short (1 to 5 sentences) and still get great results.
I use the Claude CLI or OpenCode. The "Regenerate" example is just to illustrate that the same prompt would produce different output each time. You're rolling dice.
But that's also basically true of humans. It's harder to "prove" that humans are random, but wouldn't you expect a person to do things slightly differently when given the same task on different days? People change their minds a lot; there's just no "reconsider" button for people, so you feel some social friction if you pester somebody to rethink an issue. But it's no different.
I'd be really surprised if your point is that humans, unlike AI, are super deterministic and that's why they are so much more trustworthy and smarter than AI...
It's a business with huge up-front capital expenses and typically very low margins. Supply is scaling up slowly because it's hard, and if you overshoot, you go out of business.
Nobody is "allowing" this. It's a natural property of being both advanced technology and a commodity at the same time.
What was allowed were the strange deals for manufacturers' entire future output. Try doing the same thing with onions and the government recognizes you as a criminal.
That is quite the amusing read but it seems like a poorly constructed law. It wasn't futures themselves that were the problem there. The duo engaged in blatant market manipulation and severely disrupted part of the food supply in the process.
Cornering the market with the intent to flip the goods is not quite the same as cornering the market because you actually want the goods and intend to use them yourself.
And it just so happens that many people will now have to use OpenAI’s products because they can’t get enough RAM to run a local LLM. What a coincidence.
It has the makings of a natural monopoly, except it's compounded by RAM cartels colluding to shut out the last of the competitors.
Recently they had a second price fixing lawsuit thrown out (in the US).
Now, given the state of things, I'm sure another lawsuit will arrive and be thrown out, because the government will do anything to keep the AI bubble rolling, and a price-fixing suit will somehow be a threat to national security. Obviously that's speculative opinion, but to be clear: people are allowing it. There are, and even more so were, things that could be done.
> Supporting people that want more AI regulation to stop this?
How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?
That's kind of the opposite problem -- the agent doesn't have robot arms or legs or a parcel of land. It has to rely on people to get access to land and plant and harvest the corn, and those people are ignoring it.
This is much less concerning to me than mass surveillance. If someone calls 911 and you need to send a first responder, why not send a drone to get there faster while a person is on their way?
Because today it will be used as a first responder.
Tomorrow a police officer will suggest that these drones (that we are already using successfully) could be very useful for checking up on that "dangerous" neighborhood.
In the city I live in, there is a chronic shortage of police officers and a lot of dangerous neighborhoods. If a drone could be used to do the same or a substantially-similar policing job in those neighborhoods that a human cop would, without having to pay for a human cop (not just their salary in and of itself, but also in terms of making the police department a place people are willing to work for at that salary), this would be an improvement to public safety and quality of life.
Also remote-operated drones don't need to fear that they will get suddenly shot or stabbed to death by a criminal suspect whose potential crimes they are investigating, like a human cop does; and this would itself have some beneficial effects on policing.
The "same or substantially-similar policing job" is the key to this argument, and a drone can't do it. A drone can't de-escalate a tense conflict between neighbors, provide traffic redirection after an accident, or even rescue a kitten from a tree.
It can't be a calm, reassuring presence, offer a kind smile, or give directions. It only disconnects the police force from the policed community. Its presence will only raise tensions and paranoia. And that's with unarmed drones!
Hi, I'm in Denver. They're already doing this over on Colfax. It's a significant change vs the existing halo cameras, because they use the drones to follow people.
I'm not really worried about the police. There's mountains of well reinforced legal precedent restricting them. Sure they have violence, but they kind of need to show up to do that. All the other stuff they do runs up against your rights which are really well established. Even the "civil" traffic stuff is pretty hard fought, comparatively.
Every other civil enforcer can basically fine you on a whim, and then your appeal goes into a system that makes Jim Crow look impartial. So yeah, I'm not worried about the cops. I'm worried about the zoning office "fixing" a budget shortfall by fining people for unpermitted kiddie pools or whatever, and in the ten years it takes to get smacked down in court they'll have stolen the property of a ton of people. I'm dead serious. However bad you think it could be, reality is worse. These non-LEO departments make the sloppiest podunk sheriff's office look like the FBI.
The actual problem is that there's a law requiring a permit for kiddie pools with a fine attached, that was mostly unenforced previously, so a lot of people got into the habit of breaking it.
If the drones are "providing information" to the police, it's only a matter of time before their AI hallucinates something that gets someone killed. We've already seen AI gun detection services that report things like Doritos bags as guns.
OTOH it will provide more surveillance of the police themselves. Humans are also bad at gun detection (sometimes willfully so) and this provides another check.
Not as often; it creates friction and requires cooperation from others (or an officer with unusual skill and access, presumably).
It will absolutely happen in corrupt departments, or those involving an officer with those skills and access. But data that is uploaded is infinitely harder to erase than simply turning off the camera in the first place.
That's right. And also just like the missing Epstein footage.
Because it's a social problem, not a technology problem.
At the same time, just because these instances of "missing" tape happen does not mean that body cams and jailhouse CCTV are useless. We would not take those away. Likewise for future drone footage.
How exactly does this provide more surveillance of the police themselves? I've done about ten FOIA lawsuits against police departments and it's laughable to think that they won't just lock footage away and exempt it from the public's eyes. Probably through a trade secret exemption because private companies are involved.
I don't trust the authorities to use information just for public safety and against legitimate criminal activity (in part because legitimate crime needs to be decided legally in court not just because of police suspicion).
There are too many examples where they've abused information for harassment and for dubious arrests and prosecutions. And this can be systematic, not just a few bad apples here and there.
We've already seen this with how ICE has conducted itself with more funding and surveillance.
You are giving those people the benefit of the doubt. It's been proven many many times that police will use "more information" to excuse their own decision to use violence. A decision that they already made well before the incident.
It's more "sus" that you blindly trust the police, politicians, and billionaires that have a history of discrimination, violence, and oppression and attempt to slander those who don't. Not to mention blindly trusting AI systems with someone's life - the only reason one would do that is because they either stand to profit from it or don't understand how they work. Are you really willing and eager to put your life in the hands of a piece of software that can't distinguish a gun and a Doritos bag?
Remember, oppression and invasion of privacy is still bad even if it isn't currently happening to you. If you think you can't be a target, you're sorely mistaken.
Yes. If you called from your cell phone while on foot or in your car, the drone can find your exact location and hover over you until help arrives, quicker than if EMS has to search you out themselves.
How so? I ask as a paramedic of 14 years, now retired.
If EMS has to "search you out" so does the drone.
At least in my County, we actually get very good triangulation info from 911. It was very rare that Dispatch told us they only had Level 2 (IIRC) location info (which might only be accurate to several hundred feet).
FAR more common was people who actually told us the -wrong- location. Car accidents that were several miles up the road from their location. Saying Blah St SE when they meant Blah Rd NE, etc.
Drones don't solve that problem. They're going to the wrong location, too.
> If EMS has to "search you out" so does the drone.
The point is that the drone is fast enough to arrive first, and do the searching so that you don't have to. It's just one of many possible scenarios.
I totally understand the argument that this might not be the most effective use of money, but I honestly don't understand the lack of appreciation for the number of places this could be used effectively.
Modern fire departments (including my own) are already using drones, and have found that the best use for them is not "how quickly can we find someone", but thermal imaging from above on structure fires.
> and do the searching so that you don't have to
The searching that we did just isn't really solved by drones (and I love them, some of my best photography is from a drone). It's things like "obscured house numbers on a street", "ambiguous address", not "person lost in a forest". Now if you want to talk about the use of drones for SAR? Absolutely. But for the vast majority of 911 "attempt to locate", getting there quickly is rarely the issue. We can get there quickly and still spend minutes figuring out that you're actually living on a flag property (where your home is behind another, but you share a driveway).
Yeah, I'm sure they wouldn't be helpful in every call. But the EMT user above talked about sometimes a caller giving the wrong location of a car accident. That's a clear case where a drone quickly on site could warn that crews need to be diverted elsewhere. But if it is just the occasional case where they'd be helpful, that makes them even less economically attractive.
As for fire services, in my city there is always a lead SUV vehicle (I think a captain or supervisor) who is a few blocks ahead of the actual heavy trucks. Presumably to get someone on site as quickly as possible; which made me believe that a drone could assist in that role. But I accept what you say, that there are too many limitations for it to help much, even if it can arrive quicker.
I want to see who is in a location. I get a plant to call 911, which triggers Flock drones in the general area and scans the faces of everyone it can find. I get that info from Flock.
There are always security concerns and exploits. Some crazy gamers call in 911 "swatting" attacks on people; that doesn't mean that the police shouldn't have guns, or that 911 should be turned off.
Yes, the drones should be secure. Yes, there should be measures to make sure they're not abused. But none of that takes away from anything I've said, which is ONLY to point out the situations where they could be useful. And people seem to be having a very negative visceral reaction to even considering the possibility.
Also, I'm not recommending or supporting Flock, just the concept of drone use in general.
Obviously I don't know the specifics of your city, but in general there are a lot of scenarios where it's valuable to get to a scene very quickly (no traffic, etc.) and obtain reconnaissance. Especially violent scenes, or it could even be a drunk driver who is still on the move, or a stolen car where the perpetrators are likely to flee on foot if stopped.
I'm sure you can come up with a lot more ideas using your imagination.
One of the best reasons is that a very large percentage of calls can be cleared without anyone actually going to the scene. Many cities using drones as first responders now report that they clear ~30% of calls with just a drone. This is great for small cities and towns that struggle to recruit officers and have had ballooning labor costs for police in order to get people to work there. It's also great philosophically if you want police to be involved less, because it dramatically lowers the amount of time they spend going to scenes.
There's more than one definition of missile. Florida criminal code's just one place where a drone could be considered a "missile".
Florida criminal code:
"790.19 Shooting into or throwing deadly missiles into dwellings, public or private buildings, occupied or not occupied; vessels, aircraft, buses, railroad cars, streetcars, or other vehicles.—Whoever, wantonly or maliciously, shoots at, within, or into, or throws any missile or hurls or projects a stone or other hard substance which would produce death or great bodily harm, at, within, or in any public or private building, occupied or unoccupied, or public or private bus or any train, locomotive, railway car, caboose, cable railway car, street railway car, monorail car, or vehicle of any kind which is being used or occupied by any person, or any boat, vessel, ship, or barge lying in or plying the waters of this state, or aircraft flying through the airspace of this state shall be guilty of a felony of the second degree, punishable as provided in s. 775.082, s. 775.083, or s. 775.084."
Yeah this doesn't bother me in any way, shape, or form. We already have manned aircraft that respond to such things, unmanned aircraft are a strictly better solution. It makes sense for police and it makes even more sense for fire. An aircraft can arrive at the site of a reported fire while firemen are still buckling their pants.
Likely: Scan everyone's home while en-route to the 911 call with an infrared camera. Or scan all of the license plates and faces of people along the way.
US law is more murky if they are responding to a perceived emergency. That can give probable cause. Imagine someone calls 911 due to smoke coming out of the neighbor's house. Clearly the drone can legally use IR to look for fire in the house. But it gets complicated fast, which is what worries me. Now imagine a car backfires, a microphone array reports it as a gunshot, and a drone shows up and starts scanning the nearest apartment building.
There are no cold starts at all. It's running non-stop.
Bunny bills per resource utilization (not provisioned), and since we run the backend in Go, an idle container consumes about 0.01 CPU and 15 MB of RAM and costs pennies.
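For what it's worth, that kind of idle footprint is easy to check from inside a Go process itself; a minimal sketch (the exact number varies by Go version and platform, and `Sys` is the runtime's total OS memory, a rough proxy for what a per-utilization biller would see):

```go
package main

import (
	"fmt"
	"runtime"
)

// idleFootprint reports how many bytes the Go runtime is holding
// from the OS while the process does nothing, which approximates
// what an idle container's memory bill would be based on.
func idleFootprint() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.Sys // total bytes obtained from the OS
}

func main() {
	fmt.Printf("idle runtime footprint: %.1f MB\n",
		float64(idleFootprint())/(1<<20))
}
```

A bare Go binary typically sits in the single-digit-to-low-tens of MB range at idle, consistent with the ~15 MB figure above.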
They're not coming out in favor of Chat Control -- they're coming out in favor of having some option where they can operate without violating the law.
The problem right now is that they can be held liable for distributing CSAM on their services and, since April 3, they can also be fined if they try to detect that content. It's an impossible situation.
Now, I'm not claiming that these companies always have noble intentions. But there's nothing nefarious here -- they just want regulatory certainty: do X, Y, and Z and you won't be fined or sued.
Blatantly false. They aren't liable as long as they promptly action reports, just like everyone else.
My impression is that they don't like the bad PR currently associated with the various debates surrounding children's use of social media. At the same time, they don't want to implement various policies that would be popular with the general public but would hurt their bottom line (i.e., they don't want to do the right thing).
So instead they make a big deal about various imperfections to justify draconian solutions that would see them able to implement all sorts of privacy violating measures. Thankfully that failed so now they're engaging in a smear campaign.
The current conduct of these companies in this regard is openly evil.
It's not impossible; it's their centralised model that is. What should be unthinkable is having private platforms on modern mainframes (data centers) instead of distributed, decentralised services where everyone holds a piece (DHT) or whatever they want (e.g. Nostr/Blossom) and is responsible for what they do.
It's impossible to imagine democratic societies where four fat cats know everything about everyone while most people know almost nothing about them, where information, instead of being scattered everywhere for resilience, is concentrated in just a few hands.