Hacker Newsnew | past | comments | ask | show | jobs | submit | windexh8er's commentslogin

Do people really need or want more bandaids?

Especially because it clearly showcases Garry Tan's (YC) grandeur delusions. Not only has he gone full state surveillance bullshit with Flock but also understands absolutely zero that he's vibe coding for 16 hours a day. And, shocker: it's pure slop!

These aren't labeling cases. Durnell is the one the Supreme Court took, but it's one of tens of thousands. John Durnell sued Monsanto in Missouri state court after getting non-Hodgkin's lymphoma from twenty years of spraying Roundup as the "spray guy" for his neighborhood, and the jury gave him $1.25 million for his cancer, not a fine for a missing sticker. The legal theory is "failure to warn," but that's a tort claim about whether Monsanto adequately communicated the risk to users who then got hurt, not a regulatory question about what text has to appear on the bottle. Earlier California verdicts followed suit. Juries found Monsanto liable for the plaintiffs' cancer under regular product liability law. But, none of these are Prop 65 enforcement actions. [0]

The Jays chips comparison cuts the other way. CA's Prop 65 warning for glyphosate got blocked by a federal court in 2020, the Ninth Circuit upheld the block in 2023, and Prop 65 warnings for acrylamide in food were permanently shut down last May. So... California isn't actually making Roundup carry a Prop 65 warning, which is what your chips comparison assumes. The real question in Durnell is whether federal pesticide law stops a Missouri jury from finding Bayer liable for a specific person's cancer. Pretty different from whether you slap a warning sticker on a bag of chips (and Jay's doesn't carry a "Not for Sale" in CA -that's generally smaller companies who couldn't afford reformulation but the reality is they likely just didn't sell there). [1]

[0] https://legal-planet.org/2026/02/03/pesticides-cancer-and-fa... [1] https://www.greenbergglusker.com/publications/court-finds-re...


I didn't say they had to pay a fine; I said they lost the strict-liability duty to warn claim, one of three, which requires Monsanto in that state to warn of any potential risks known at the time of manufacture. I think we're all clear it's a tort claim!

But you did imply they were labeling cases. That is not the case and the basis of my response.

It's fun to pretend the US models have no censorship constraints.

US models align with our "average" (western) values. If we outsource thinking by using LLMs, why would we outsource it to an LLM that doesn't have our values encoded in it?

I remember asking Gemini about that one famous 9/11 joke from late Norm MacDonald and it got really iffy about answering. Told it that hey I'm not american and in our culture it's not such a taboo.

But yes, they do have similar constraints.


Any source for this?

Basically any frontier model right now and ask it any politically divisive fact that may upset certain classes of people.

For example?

Because for Deepseek is pretty straightforward censorship.



It doesn't look like self censoring at all - basically you want the default behavior of llms to gamble on the ethnicity of someone based on how they look.

Grok used a book as a reference.

It's not like ethnicity is a fact you infer from looking at someone.

Now ask Deepseek about what happened in Tiananmen Square and watch what censorship actually looks like.

It literally knows the facts, but then there's a layer that prevents it from stating the facts.

That's censorship.

It's not an opinion, it's not a choice when facing a gradient, it's just an historical known fact.


I look at this as Google needs a competitor. While Anthropic seems to be the flavor of the quarter OAI looks like such a dumpster fire right now that it's in Google's best interest to help keep Anthropic moving towards winning the #2 spot. I say the #2 spot because it doesn't matter how good this week's LLM is. Until someone else owns the infra and has an actually profitable business model they're all just lighting money and the world around us on fire.

I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.


> Like yesterday? LLM-assisted coding is $100/mo. It looks very commoditized when most houses in developed world pay more for electricity than that.

But, it's not $100/mo. I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield. Check out their pricing and then go look at Reddit for actual experiences. With video generation the results are very easy to see. With code generation the results are less clear for many users. Especially when things "just work".

Again, it's not $100/month for Anthropic to serve most uses. These costs are still being subsidized and as more expensive plans roll out with access to "better" models and "more* tokens and context the true cost per user is slowly starting to be exposed. I routinely hit limits with Anthropic that I hadn't been for the same (and even less) utilization. I dumped the Pro Max account recently because the value wasn't there anymore. I am convinced that Opus 3 was Anthropic's pinnacle at this point and while the SotA models of today are good they're tuned to push people towards paying for overages at a significantly faster consumption rate than a right sized plan for usage.

The reality is that nobody can afford to continue to offer these models at the current price points and be profitable at any time in the near future. And it's becoming more and more clear that Google is in a great position to let Anthropic and OAI duke it out with other people's money while they have the cash, infrastructure and reach to play the waiting game of keeping up but not having to worry about all of the constraints their competitors do.

But I'd argue that nothing has been commoditized as we have no clue what LLMs cost at scale and it seems that nobody wants to talk about that publicly.


> I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield. Check out their pricing and then go look at Reddit for actual experiences. With video generation the results are very easy to see

Video is a different ballgame entirely, its less than realtime on _large_ gpus. moreover because of the inter-frame consistency its really hard to transfer and keep context

Running inference on text is, or can be very profitable. its research and dev thats expensive.


My point wasn't the delta in work between video and text generation. It was that the degradation of a prompt is much more visible (because: literal). But, generally agree on the research/dev part.

I've been using Fastmail since late 2014 and have been happy with the lack of "features" they've chased. I still have a grandfathered Google Workspace account as well as a handful of Gmail addresses and the difference, at this point, is stark. All of the "convenience" features in Gmail amongst the dark patterns of user data collection is pretty atrocious.

Kudos to the Fastmail team for keeping it classy. The MCP implementation may be a great way to leverage some local models to help clean up years of things I no longer need but don't want to waste the time on.


And we don't think the judge can/will be gamed? Also... It's an LLM, it's going to add delay and additional token burn. One subjective black box protecting another subjective black box. I mean, what couldn't go wrong?


you can use a safety model trained on prompt injections with developer message priority.

user message becomes close to untrusted compared to dev prompt.

also post train it only outputs things like safe/unsafe so you are relatively deterministic on injection or no injection.

ie llama prompt guard, oss 120 safeguard.


Unfortunately it's not that simple. Self-policing AI systems will always be gamed. Just one [0] example of this among many.

[0] https://www.hiddenlayer.com/research/same-model-different-ha...


Clearly the Meta execs they hired are about as useful as most 3-letter exec titles because, wow, did OAI miss the boat again. Personally I'm glad they've made as many missteps as they have, but quite the amateur move to not seize the market opportunity and keep it holistically for themselves. They took nothing from Google's paved road of incumbency in this segment.

Again, personally, I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days. And if anyone thinks OAI has been anything remotely "strategic" around their product, well... Then you must enjoy shooting darts in the dark.


This appears to be more like a toxic rant than a reasonable argument.

> quite the amateur move to not seize the market opportunity and keep it holistically for themselves

What does this even mean? There are so many businesses, especially in the advertising world, that first start white-label reselling so that you can scale up super easy and quickly. Then once market is captured, you integrate everything. This is a common adtech playbook, and the Meta execs know that as well.

And I say this as someone who founded & exited their own adtech platform.

I would not recommend OpenAI to start developing an RTB platform right now at all. Just first prove there is a market and the value is there.

> They took nothing from Google's paved road of incumbency in this segment.

Google bought / acquired themselves into the online adtech market mostly. Yes they have adwords, which was only really becoming something a decade after Google launched, which they paired with their acquisition of half the adtech giants (DoubleClick, Invite and AdMeld). So yeah, not a great example.

> I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days.

This is just a useless attack for no reason.


> This appears to be more like a toxic rant than a reasonable argument.

Thank you for your subjective analysis.

> What does this even mean? There are so many businesses, especially in the advertising world, that first start white-label reselling so that you can scale up super easy and quickly. Then once market is captured, you integrate everything. This is a common adtech playbook, and the Meta execs know that as well.

This would be interesting if any of it were true in the case of OAI. They haven't captured the market and they don't appear as if they will. They're at the losing end of Anthropic and Google right now. The Meta execs OAI has don't understand the game today, in my opinion based on their approach.

> And I say this as someone who founded & exited their own adtech platform. > I would not recommend OpenAI to start developing an RTB platform right now at all. Just first prove there is a market and the value is there.

So OAI, their financials, their business models, their number of customers and their competition all aligned with your exit in adtech? Somehow I doubt it, but feel free to share.

> Google bought / acquired themselves into the online adtech market mostly. Yes they have adwords, which was only really becoming something a decade after Google launched, which they paired with their acquisition of half the adtech giants (DoubleClick, Invite and AdMeld). So yeah, not a great example.

Actually, it is a good example. Because when Google did this they had zero competition. Now Google is the competition. So, yeah... In line with the new reality. You don't get to compare OAI with Google 20+ years ago.

> This is just a useless attack for no reason.

No, it's reality for a lot of us. Altman is tied to a lot of unsavory players. If you want to apologize for these types of people then feel free to be their cheerleader. The great part of HN is that these comments have and carry market sentiments (all across the board). I could say your comments are useless, out-of-touch, founder drivel... But I haven't.


It just happens to be a lot worse now. Confidence through ignorance has come into the spotlight with the commoditization of LLMs.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: