Imagine training a chess bot to predict a valid sequence of moves, or a whole valid game, using standard algebraic notation for chess.
Great! It will now correctly structure chess games, but we've created no incentive for it to produce a game where white wins, or to make the next move a good one.
OK, so now you change the objective: "we don't just want valid games, we want you to predict the next move that will help that side win."
And we train toward that objective, and it starts picking better moves (note: the moves are still valid).
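To make the swap concrete, here's a toy sketch in PyTorch. Everything in it (the tiny model, the one-token-per-move vocabulary, the reward convention) is an assumption for illustration, not how a real chess bot is built:

```python
# Toy sketch of the objective swap. The model and reward convention are
# stand-ins assumed for illustration only.

import torch
import torch.nn.functional as F

VOCAB = 4096                          # pretend each move is one token
model = torch.nn.Linear(64, VOCAB)    # stand-in for a real sequence model

def stage1_loss(hidden, next_move_ids):
    """Objective #1: plain next-token prediction ("emit a valid move")."""
    return F.cross_entropy(model(hidden), next_move_ids)

def stage2_loss(hidden, sampled_move_ids, rewards):
    """Objective #2: REINFORCE-style ("emit moves that go on to win").
    rewards could be +1 / -1 depending on whether that side won."""
    logp = F.log_softmax(model(hidden), dim=-1)
    chosen = logp.gather(-1, sampled_move_ids.unsqueeze(-1)).squeeze(-1)
    return -(rewards * chosen).mean()  # push up moves that led to wins
```

Same model, same valid-move vocabulary; only the loss being minimized changes between stages.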
You might imagine more sophisticated ways to optimize for good moves. You keep adjusting the objective function; you might train a pool of models, all based off the initial model, give each a slightly different curriculum, then hold a tournament and pick the winningest model. Great!
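The tournament step might look something like this; play_game is a hypothetical placeholder for actually having two models play each other:

```python
# Toy pool-plus-tournament selection. play_game is a placeholder, not
# real self-play.

import itertools
import random

def play_game(model_a, model_b):
    """Placeholder: return the winner of one game between two models."""
    return random.choice([model_a, model_b])

def tournament(pool, games_per_pair=10):
    """Round-robin over the pool; return the winningest model."""
    wins = {id(m): 0 for m in pool}
    for a, b in itertools.combinations(pool, 2):
        for _ in range(games_per_pair):
            wins[id(play_game(a, b))] += 1
    return max(pool, key=lambda m: wins[id(m)])
```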
Now you might have a skilled chess-playing model.
It is no longer correct to say it just predicts valid chess games, because the objective function changed several times throughout this process.
This is exactly how you should think about LLMs, except the ways the objective function has changed are significantly more complicated than for our chess bot.
So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first two generations of the models, and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining does instill other objectives. Almost like swapping our first step, "predict valid chess moves," with "predict Stockfish outputs").
Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine-tuning an open-weights model for this purpose.
And, I mean, if they don't, GPT-5.3 is going to be pretty good help.
Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.
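For a sense of what "cheap fine-tune" means in practice, here's a hedged sketch using Hugging Face's peft library; the model name is a placeholder and the hyperparameters are guesses, not a vetted recipe:

```python
# Hedged sketch of a LoRA fine-tune setup. The base model name is
# hypothetical and the hyperparameters are illustrative guesses.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "some-small-open-weights-model"  # hypothetical name
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA trains a few million adapter weights instead of billions of base
# weights, which is what makes this cost-effective at volume.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adjust per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base
```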
They do note that their contract language specifically references the laws as they exist today.
Presumably, if the laws become less restrictive, that does not impact OpenAI's contract with them (nothing would change), but if the laws become more restrictive (e.g. certain loopholes in processing Americans' data get closed), then OpenAI and the DoD should presumably^ not break the new laws.
^ we all get to decide how much work that "presumably" is doing
You can construct sequences of rational numbers whose limit is not rational (e.g. it's sqrt 2).
Trivially, the sequence whose terms are the truncated decimal expansions of root 2 (e.g. 1.4, 1.41, 1.414, ...), although I find this somewhat unsatisfying.
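Written out (using the floor function to do the truncating; this formalization is mine, not the parent's):

```latex
a_n = \frac{\lfloor 10^n \sqrt{2} \rfloor}{10^n} \in \mathbb{Q},
\qquad
\lim_{n \to \infty} a_n = \sqrt{2} \notin \mathbb{Q}
```

Each term is rational by construction, yet the sequence converges to an irrational number: a "gap" in the rationals.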
With the real numbers there are no such gaps: every Cauchy sequence of reals converges to a real number (this is the completeness property).
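Stated formally:

```latex
\text{If } (a_n) \subset \mathbb{R} \text{ is Cauchy, i.e. }
\forall \varepsilon > 0 \;\exists N \;\forall m, n \ge N:\; |a_m - a_n| < \varepsilon,
\text{ then } \exists L \in \mathbb{R} \text{ with } \lim_{n \to \infty} a_n = L.
```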
They sound exactly like George Bush and every other American leader who has claimed high-minded ideals while engaging in interventions around the world in direct contradiction to those ideals.
To be clear, I don't think Anthropic is itself intervening.
The concern they've raised about authoritarianism is "AI enabling authoritarians."
When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.
I think it's reasonable to hear high-minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy.
Example post https://www.reddit.com/r/running/comments/7tnzxy/stravas_hea...