Hacker News | wrsh07's comments

It was also Strava, and it showed "popular running routes"

Example post https://www.reddit.com/r/running/comments/7tnzxy/stravas_hea...



You've latched onto the $10b number - is that illustrative, or is there something specific you're referring to?


Same. I can already view plain text in vim in ghostty. At the very least I'm not understanding what the value add is here.


Imagine training a chess bot to predict a valid sequence of moves (a valid game) in the standard algebraic notation for chess.

Great! It will now correctly structure chess games, but we've created no incentive for it to produce a game where white wins, or to make the next move "good".

OK, so now you change the objective. Let's say: "we don't just want valid games, we want you to predict the next move that will help that color win."

And we train towards that objective and it starts picking better moves (note: the moves are still valid)

You might imagine more sophisticated ways to optimize for good moves. You keep adjusting the objective function: you might train a pool of models, all based off the initial model, give each a slightly different curriculum, then run a tournament and pick the winningest model. Great!

Now you might have a skilled chess-playing model.

It is no longer correct to say it just predicts valid chess games, because the objective function changed several times throughout this process.

This is exactly how you should think about LLMs, except the ways the objective function has changed are significantly more complicated than for our chess bot.

So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first couple of generations of models, and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining instills other objectives too - almost like swapping our first step, "predict valid chess moves", for "predict Stockfish outputs").
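The staged-objective idea is easy to sketch with a toy game in place of chess. This is purely illustrative (nothing any lab actually runs): stage 1 bakes in validity by masking illegal moves, stage 2 reinforces whatever moves the winner played, a crude REINFORCE-style update. The game here is single-pile Nim (take 1-3 stones; whoever takes the last stone wins).

```python
import random

MOVES = (1, 2, 3)          # you may remove 1-3 stones per turn
START = 10                 # pile size at the start of each game

def new_policy():
    # Stage 1 objective: only produce *valid* moves.
    # Validity is enforced by masking moves larger than the pile.
    return {s: {m: 1.0 for m in MOVES if m <= s} for s in range(1, START + 1)}

def sample_move(policy, state, rng):
    moves, weights = zip(*policy[state].items())
    return rng.choices(moves, weights=weights)[0]

def play(policy, rng):
    """Self-play one game; return the winner (0 or 1) and each side's moves."""
    state, player, history = START, 0, {0: [], 1: []}
    while True:
        move = sample_move(policy, state, rng)
        history[player].append((state, move))
        state -= move
        if state == 0:
            return player, history   # taking the last stone wins
        player = 1 - player

def train(policy, games, rng):
    # Stage 2 objective: reinforce moves that led to a win.
    for _ in range(games):
        winner, history = play(policy, rng)
        for state, move in history[winner]:
            policy[state][move] += 1.0  # crude count-based reinforcement

rng = random.Random(0)
policy = new_policy()
train(policy, 5000, rng)
# Every move the trained policy can emit is still valid (stage 1 survives
# the objective swap), and the weights now also encode winningness (stage 2).
```

The point of the sketch is the same as the comment above: the final policy isn't described by either objective alone, because the objective was swapped partway through.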


I think Maggie uses home-cooked here and it works for me because of the extended analogy

https://maggieappleton.com/home-cooked-software


Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine tuning an open weights model for this purpose.

And, I mean, if they don't, gpt 5.3 is going to be pretty good help

Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.


Contrary to benchmarks, open weight models are way behind the frontier.


My point is that you don't want a big model for the kind of analysis being discussed here

Even if they were paying frontier prices, they would be choosing 5 mini or nano with no thinking.

At that point, a fine-tuned open-weights model is going to be on the Pareto frontier.


They do note that their contract language specifically references the laws as they exist today.

Presumably if the laws become less restrictive, that does not affect OpenAI's contract with them (nothing would change), but if the laws become more restrictive (eg certain loopholes in processing Americans' data get closed) then OpenAI and the DoD should presumably^ not break the new laws.

^ we all get to decide how much work this presumably is doing


> They do note that their contract language specifically references the laws as they exist today.

Where?

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

Sounds like it's worded to specifically apply to whatever law is currently applicable, no?


You can construct sequences of rational numbers where the limit is not rational (eg it's sqrt 2)

Trivially, the sequence whose terms are the truncated decimal expansions of root 2 (eg 1.4, 1.41, 1.414, ...), although I find this somewhat unsatisfying.

With the real numbers there are no such gaps: every Cauchy sequence of reals converges to a real number. That's what it means for the reals to be complete.
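The truncation sequence described above can be written down explicitly with exact rational arithmetic; here's a quick sketch (the helper name `sqrt2_truncation` is mine):

```python
from fractions import Fraction
from math import isqrt

def sqrt2_truncation(k):
    """The decimal expansion of sqrt(2) truncated to k digits, as an exact rational."""
    # floor(sqrt(2) * 10**k) / 10**k, computed with exact integer arithmetic
    return Fraction(isqrt(2 * 10 ** (2 * k)), 10 ** k)

seq = [sqrt2_truncation(k) for k in range(1, 8)]
# seq == [7/5, 141/100, 707/500, ...]  i.e. 1.4, 1.41, 1.414, ...
# Every term is rational and the terms are Cauchy (consecutive terms agree
# to k digits), but the limit, sqrt(2), is not rational: that's the "gap"
# in the rationals.
```

Each term satisfies term² ≤ 2 < (term + 10⁻ᵏ)², which is exactly the truncation property.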


They didn't claim to have pacifist ideals

In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.

Just because you disagree with their ideals doesn't mean they're not holding to theirs


They sound exactly like George Bush and every other American leader who's claimed high-minded ideals while engaging in interventions around the world that directly contradict those ideals.


To be clear, I don't think anthropic is itself intervening.

The concern they've raised about authoritarianism is "AI enabling authoritarians."

When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.

I think it's reasonable to hear high-minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy.

