>>The FTC's 2024 rule banning fake social influence metrics carries penalties of $53,088 per violation - and the SEC has already charged startup founders for inflating traction metrics during fundraising
Six million fake stars is just what this small crew found, likely in a matter of hours.
A fine of $53,088 times six million is $318.528 billion.
Just going hard after a small portion of that should both put an end to it and a slight dent in the deficit.
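A quick back-of-the-envelope check of the arithmetic above (the per-violation figure and the six-million count are taken from the comment, not independently verified):

```python
# Illustrative check of the numbers cited above.
FINE_PER_VIOLATION = 53_088   # FTC penalty per violation, in dollars (as cited)
FAKE_STARS = 6_000_000        # fake stars found by the researchers (as cited)

total_exposure = FINE_PER_VIOLATION * FAKE_STARS
print(f"${total_exposure:,}")  # $318,528,000,000 -- about $318.5 billion
```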
This kind of fraud is rampant because everyone concludes the way to win is not to make a real advance but simply to game the system. And they are not wrong: the lack of enforcement makes the rules meaningless.
Yes, Hamas and Hezbollah are the IRGC's remote occupying armies in the adjacent territories of Gaza and Lebanon, and they have launched rockets into Israel on a near-daily basis for years.
The difference is that Ham/Hez rockets are small, unguided, generally aimed at residential or commercial areas to cause disruption, low-intensity, and routinely intercepted.
The differences are:
1) The attacks from Iran are large warheads on intermediate range ballistic missiles, with precision targeting.
2) They are fired in coordinated barrages along with drones and other rockets specifically intended to over-saturate the Israeli missile defenses.
3) The attacks are also targeted specifically at industrial infrastructure with the intent of causing maximal damage to the world economy.
4) The targeting uses high-precision satellite data and intelligence from Russia and China to cause maximum global damage.
Those are major differences in both quality and quantity of the attacks.
Before, the risk was relatively low: attacks did not target this type of industrial site, did not use high-grade intelligence and precision targeting, did not saturate defenses, and used small warheads unlikely to cause major damage.
The situation since 28-Feb-2026 is entirely different.
So, you are saying it is unreasonable to take a company at its word.
You are saying it is unreasonable to understand the words a company uses as having the same meaning as understood by every other speaker, writer, listener, and reader.
And you are saying it is reasonable for a company to lie, to make claims that are the opposite of the words it uses.
Yes, the company's network resources are finite, and it is reasonable to put in limits on abuse. That does not (and should not) create a license to abuse the language. There are many other words a company could HONESTLY use to describe its biggest plan that do not mean infinite when they mean finite. Any competent marketer and lawyer can find a thesaurus.
(And the same applies to "Full Self Driving", an obvious lie now in its second decade.)
>>"never once seen a productionalized version of these"
YET
Just because we have not YET seen one does not mean it should not be pursued.
Examples are endless; start with this one: 30 years ago, no one had seen a solar panel with 25% efficiency produced for less than $1/watt. Now solar is the most economical, fastest-growing, and most sustainable energy source on the planet.
That argument is simply an argument against all efforts at making progress. Perhaps rethink making it?
Not GP, but I've seen multiple credible news stories on this.
The problem isn't the red-light camera itself, it is that whoever installs/manages them also reduces the time of the yellow-light warning, so the red light comes on significantly sooner. The normal yellow light timing is a properly studied and engineered interval based on traffic and speed to give drivers sufficient warning to see, decide, and go or slow-stop in a safe and predictable fashion.
When the red-light-camera installers/managers cut that time to increase infractions and revenue, they create situations where drivers think they will make it through the intersection in good time but are surprised by the sooner-changing red light and emergency-brake before the line. This causes accidents, including rear-end collisions that push the car into the intersection, where it is then T-boned.
This invalid yellow-red light timing was revealed in some lawsuits about it.
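The "properly studied and engineered interval" mentioned above is conventionally derived from the standard ITE kinematic formula, y = t_r + v / (2(a + gG)). A minimal sketch, using typical textbook values for reaction time and deceleration (the example speed is illustrative, not from any cited case):

```python
def yellow_interval(v_mps, reaction_s=1.0, decel_mps2=3.0, grade=0.0, g=9.81):
    """ITE kinematic yellow-change interval, in seconds.

    y = t_r + v / (2 * (a + g * G))
    v_mps      -- approach speed in m/s
    reaction_s -- driver perception-reaction time (typically ~1 s)
    decel_mps2 -- comfortable deceleration rate (typically ~3 m/s^2)
    grade      -- approach grade as a decimal (0.0 = flat)
    """
    return reaction_s + v_mps / (2 * (decel_mps2 + g * grade))

# A 45 mph (~20.1 m/s) approach works out to roughly 4.4 seconds.
# Shortening that to, say, 3 seconds is exactly the abuse described above:
# drivers committed to the intersection suddenly have no safe stopping option.
print(round(yellow_interval(20.1), 2))
```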
I think the right solution is to maintain properly engineered timing, install cameras that also trigger full video from multiple angles, manually evaluate each positive, ticket only the egregious cases, and keep records of violators who cause accidents.
But since the easy money is evidently too much of a temptation to fck with people, ban them all.
We already ban the thing that is being abused — running red lights. Yet people do it with deadly results so much we're looking for another solution.
With the cameras, the camera salespeople and the town managers just can't get away from "It increases revenue (and if we screw with the yellow-light timing, we can increase it even more!)".
I'd be all over making any town manager and red-light-camera-salesperson involved in a decision to screw with the red lights personally and criminally liable for any accidents resulting from screwing with yellow-light timing, and requiring all timing before installation to be officially logged, but they'll try to find ways around that too. And then there is the whole surveillance capitalism thing — we've got the cameras, why not record all license plates, and tie them to driver license and voting records, and, and, and...
Red lights reduce injury and deaths due to car crashes.
Red light cameras do NOT do so, and when the companies/cities screw with the properly engineered yellow light timing to increase revenue, the cameras INCREASE crashes and injuries. It has been definitively studied multiple times, and cities have de-installed camera systems after these findings.
If cameras are installed without shortening the yellow-light timing, the safety effect is not demonstrated. Thinking about it, a likely camera ticket would deter only one of three types of red-light-running: the impatient driver who slows at the red light, looks, then crosses anyway if traffic is light. That type causes few accidents; careful drivers either don't do it or do it with sufficient caution that accidents are extremely unlikely.
The others, where the driver fails to see the red light and blows through it, or where they are just on some criminal blast through town (evading cops, high on drugs, whatever), will not be deterred in the slightest by the cameras. The eyesight of the first will not be helped by a likely camera-automated-ticket, and the second already has far bigger legal problems coming and won't care.
So, explain the circumstances where a red light camera actually reduces accidents and injuries. I'm not even seeing a plausible measurable effect beyond revenue extraction.
An autonomous robot that I program, that I update (or don't) as and when I see fit, that does not need to connect to the internet or to anything else, for $20k, that does dishes or helps me lift things at the shop, returning data to its maker IF AND ONLY IF, AND WHEN, I CHOOSE (or don't)? Great - take my money!
An ambulatory machine with eyes, ears, and touch sensors, continually watching, listening, observing, mapping, and recording everything it encounters in my home and/or shop, and sending all that data back to its manufacturer "for improved user experience"? HELL, NO!
The latter, supplied by Musk, even if he's paying me $1 million per year to 'host' it? I'll buy the equipment to destroy it as soon as it comes onto my property.
I don't think there is any supplier I could even begin to trust when they require a connection. Can anyone here think of one?
>>Flares, the standard IR countermeasure, are less effective against imaging IR seekers that can distinguish an aircraft shape from point-source decoys.
> The 9K310 Igla-1 system and its 9M313 missile were accepted into service in the Soviet Army on 11 March 1981. The main differences from the Strela-3 included an optional Identification Friend or Foe system to prevent firing on friendly aircraft, an automatic lead and super elevation to simplify shooting and reduce minimum firing range, a slightly larger rocket, reduced drag and better guidance system extend maximum range and improve performance against fast and maneuverable targets, an improved lethality on target achieved by a combination of delayed impact fuzing, terminal maneuver to hit the fuselage rather than jet nozzle, an additional charge to set off the remaining rocket fuel (if any) on impact, an improved resistance to infrared countermeasures (both decoy flares and ALQ-144 series jamming emitters), and slightly improved seeker sensitivity.
> The seeker has two detectors – a cooled MWIR InSb detector for detection of the target and uncooled PbS SWIR detector for detection of IR decoys (flares). The built-in logic determines whether the detected object is a target or a decoy. The latest version (Igla-S) is reported to have additional detectors around the main seeker to provide further resistance against pulsed IRCM devices commonly used on helicopters.
> Since 2014 the Igla is being replaced in Russian service by the new 9K333 Verba (Willow) MANPADS.[4] The Verba's primary feature is its multispectral optical seeker, using three sensors as opposed to the Igla-S' two. Cross-checking sensors against one another better discriminates between relevant targets and decoys, and decreases the chance of disruption from countermeasures, including lasers that attempt to blind missiles.
No one's likely to have been surprised by this capability. It's 80s tech.
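The two-detector logic described in the quote is essentially a spectral-ratio test: a magnesium flare burns far hotter than engine exhaust, so it radiates a much larger share of its energy at shorter (SWIR) wavelengths. A toy sketch of that decision rule follows; the threshold and signal values are invented for illustration, not taken from any real seeker:

```python
def classify_return(mwir_signal, swir_signal, ratio_threshold=0.5):
    """Toy two-color discriminator.

    Aircraft exhaust (roughly 600-900 K) radiates mostly in the MWIR band,
    while a flare (2000 K or more) shifts much of its output into SWIR.
    A high SWIR/MWIR ratio therefore suggests a decoy. The threshold is
    made up for illustration.
    """
    if mwir_signal <= 0:
        return "no target"
    return "decoy" if swir_signal / mwir_signal > ratio_threshold else "target"

print(classify_return(mwir_signal=10.0, swir_signal=1.0))  # target
print(classify_return(mwir_signal=10.0, swir_signal=8.0))  # decoy
```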
What you're describing is 2 or 3 sensors - effectively 2 or 3 pixels. Enough to discriminate when an aircraft launches a flare, but not really "imaging" in the modern sense.
Early heat seeking missiles would use a single IR sensor with mechanical scanning.
Thermal imaging and machine vision, of the kind you can now do cheaply, isn't 80s tech. It's probably late-90s tech for advanced western states. And now it's starting to be ubiquitous cheap tech too. You can buy a thermal imaging camera with 20k pixels for a few hundred dollars now. Combine that with some image processing and you've got a very robust target detection pipeline.
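To illustrate the gap between a 2-3 pixel seeker and genuine imaging: with a full thermal frame you can discriminate by shape, e.g. flood-fill connected hot pixels and treat an extended blob as an aircraft and a point source as a flare. A toy sketch, with all thresholds and the sample frame invented for illustration:

```python
def hot_blob_sizes(frame, threshold):
    """Return sizes of 4-connected components of pixels above threshold."""
    rows, cols = len(frame), len(frame[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and (r, c) not in seen:
                # Flood-fill one connected blob of hot pixels.
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

def classify(frame, threshold=100, min_aircraft_pixels=5):
    """Label each hot blob: extended shape => aircraft, point source => flare."""
    return ["aircraft" if s >= min_aircraft_pixels else "flare"
            for s in hot_blob_sizes(frame, threshold)]

# 6x8 toy frame: a 6-pixel "fuselage" on the left, a 1-pixel flare on the right.
frame = [[0] * 8 for _ in range(6)]
for c in range(1, 4):
    frame[2][c] = frame[3][c] = 200  # extended hot shape
frame[4][6] = 250                    # point source
print(classify(frame))               # ['aircraft', 'flare']
```

A 2- or 3-detector seeker cannot run anything like this; it can only compare aggregate signals, which is exactly the distinction being drawn above.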
I don't think GP is talking about average users; they seem to be talking about decision-makers in organizations, e.g., a town board that wants to achieve digital independence, but is made unsure by apparent turmoil in the governance in open source orgs...
He is wrong about almost everything, and especially about introspection.
But he got lucky and wrote a good-enough-for-the-time browser at just the right time.
Now, he mistakes his luck and his F_U_Money for skill and intelligence. And why wouldn't he? He can simply walk away from any situation that makes it seem he is wrong.
And the broader problem in society is that nearly the entire populace has been conditioned to ignore the role of luck and to mistake monetary success for hard work and wisdom, when in fact those people are often no more than massively amplified fools.
The massive follies of most of these current robber barons make the case for taxing them out of existence. Once someone has more money than they and their family could spend in multiple lifetimes of excessive luxury, the only reason to have more is power. We should ramp up tax rates so those people cannot accumulate that power.
Power corrupts; absolute power corrupts absolutely. A society that fails to manage that fact of human nature dooms itself.