I don't recall ever seeing USSR products in stores, while plenty of manufactured goods from other countries were. (By products I meant manufactured products, not extracted resources like oil.)
I got some Soviet-made wrenches and a drill from my great-grandfather, and East German drill bits from an auction, despite nobody in my family having lived outside the US in 120 years. No, it isn't common, but I wouldn't expect the Soviet Union's biggest rival to be importing many of their products to start with, so the fact that I possess them at all is decent evidence of their significant production volume.
But no cars, washing machines, microwaves, electronics, furniture, apparel, and on and on. Kinda sad for the size of the country.
I bought some Soviet stuff after the fall of the USSR, because it was unique and interesting. One item was a telescope, one was a brand new rotary dial telephone manufactured in the 1950s, and one was a mechanical clock reputed to be from a submarine.
I'm only sad that I abandoned my phone line (as I only received spam calls on it) and so my Commie Phone is a nice, but useless, desk ornament.
I remember books (there was a famous Soviet science publisher, which I believe we learned later had gulag deportees working on their printing presses), and I seem to recall toys and some foods.
My memory from the period is far from perfect, though, as I was a kid when the USSR collapsed.
I think that may have been a result of the political divide of that era. The USSR did export some machinery and arms, but those were traded largely with other Communist countries and "third world" countries.
Zen is great and still mostly Firefox. I use standard Firefox on Android and everything syncs without hassle. The experience is so much better that I personally cannot imagine using Chromium anymore. Of course, I do wonder whether the entire Firefox ecosystem is sustainable long-term, funding-wise.
A 5.4 spin with slightly different guardrails is not "access to the latest models". We know this to be true from the article because they have a section entitled "Looking ahead to our upcoming model release and beyond". I wonder if they didn't just feel like they were caught out by Mythos.
Being marked an enemy of the state for disagreeing with the state to me sounds like thoughtcrime, plain and simple. How much more Orwellian can you get?
I remember neither that happening in 1984, nor is that a description of what is happening to Anthropic. Or is this an Animal Farm reference instead?
I remember Winston having a private conversation about political beliefs, and then being literally tortured into submission. And I remember Anthropic refusing a government order (albeit a stupid government order), and then being labeled a "supply chain risk." You can twist reality however you'd like though.
You don’t remember the concept of thoughtcrime in 1984? Or you don’t recall how thoughtcrime gets you branded an enemy of the state? The former is a term literally introduced in 1984, and the Thought Police are tasked with locating and eliminating it. Throughout the book there are news reports of thought criminals caught and arrested who are now enemies of the state. The book ends with Winston being tortured until he completely succumbs to the thought control and, it is implied, is then shot.
If you can’t see the allegory in that story to an administration that actively goes after those it labels as enemies because they dare to voice their own opinion or oppose its political goals in any way, then either you’re not cut out for literary analysis and applying metaphors in literature to the real world, or you aren’t seeing the real world for what it is.
Ok, just labeling them a supply chain risk while also claiming they’re critical to national security, all for insisting the government stick to the uses of the model it agreed to in the contract instead of expanding them.
> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Yup, definitely not an enemy.
> Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity
Don’t you call your friends duplicitous?
> Anthropic’s stance is fundamentally incompatible with American principles.
Oh boy. Doubleplus ungood.
> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic
Oh yeah, totally not an enemy. Just no one can do business with them. Doubleplusungood behavior.
They’re both a danger to US troops with their behavior and also critical to the supply chain of said troops. Very important to understand and accept that doublethink.
This doesn't require the slightest bit of doublethink. Their technology is fantastic and would be an important military tool if Anthropic allowed it to be used as such. Their choice to disallow it makes them a supply chain risk, but the existence of the technology makes them important. This isn't hard.
There's no need to read it that literally, we're not making Borges' map here. 1984 is both about the visceral horror of the authoritarian state and the existential horror of being unable to fight an opponent who controls the very language you speak and the concept of truth. The former grounds the latter, turning an interesting philosophical treatise that might otherwise not land with readers into an approachable work of fiction.
They got labeled a "supply chain risk" in order to prevent the government from contracting with them. They didn't disappear or arrest or even charge Dario. He's a billionaire with more freedom and opportunity than Orwell could have even imagined.
I would love to hear your perspective of how the label "supply chain risk" and its definition aren't in accordance with the concept of being branded an enemy of the state. I'll reproduce the definition below:
> “Supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252). (https://www.acquisition.gov/dfars/subpart-239.73-requirement...)
There's a little bit of leeway here, but this definition means either the company is an adversary (or an extension of one, e.g. Huawei/the CCP) or is under threat of being compromised by an adversary.
So which is Anthropic? Well, neither: the government's court filings and public comments in the media claim that Anthropic has an "adversarial posture". They want to simultaneously get away with bucketing Anthropic under the statute for adversaries, but without calling Anthropic an adversary directly in a court of law. They want to apply the statute without needing to follow the actual definition of an adversary.
From a CNBC interview:
> We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection. That's really where the supply chain risk designation came from. (https://www.cnbc.com/2026/03/12/anthropic-claude-emil-michae...)
That's why the judge rightly called this situation Orwellian: we're looking at linguistic sleight of hand designed to let the government turn a simple contract dispute into a company-threatening classification, one that would cut Anthropic off from every company that does business with the most powerful entity in the United States. Because Anthropic doesn't want to do the government's bidding, despite being free to refuse as a matter of speech, they are being threatened with a punishment that goes far beyond merely losing direct government contracts. And that's not fair.
I would also love to understand why you keep going back to the literal events of the book. You don't need to be locked in a room and forced to claim that 2+2=5 for your situation to be Orwellian.
> I remember Winston having a private conversation about political beliefs, and then being literally tortured into submission.
I remember Winston being forced to accept that 2+2=5 and believing it.
> In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all, how do we know that two and two make four? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable—what then?
> And I remember Anthropic refusing a government order (albeit a stupid government order), and then being labeled a "supply chain risk." You can twist reality however you'd like though.
I remember when American companies could do domestic business, or not, with whomever they wished without having to worry about being punished by the government for their choices.
If a government orders a pacifist to pick up a gun, is that allowed? If a government orders a pacifist to manufacture a gun, is that allowed? (There's a spectrum of 'complicity'.)
> I remember when American companies could do domestic business, or not, with whomever they wished without having to worry about being punished by the government for their choices.
No you don't, because that time has never existed.
> If a government orders a pacifist to pick up a gun, is that allowed? If a government orders a pacifist to manufacture a gun, is that allowed? (There's a spectrum of 'complicity'.)
Yes. It's called the draft. It's called wartime manufacturing decrees. These all existed in Orwell's time, and he never alluded to them as thoughtcrimes. Compelling people to act against their beliefs is common and distinct from thoughtcrime. And if you cannot see that, then I don't even know how to talk to you. Government has always controlled your outer life. Orwell introduced thoughtcrime as the next step in totalitarianism: the erasure of inner life.
edit: I asked Opus to analyze this thread, and I agree with it.
> That said, Orwell would probably also note that the people arguing against you aren't entirely wrong to be alarmed — they're just reaching for the wrong literary reference and overstating the analogy. Government retaliation against companies for political speech is concerning on its own terms without needing to be dressed up as dystopian fiction. The 1984 framing actually weakens the critique by making it easy to dismiss as hyperbolic.
> He'd probably tell everyone in the thread to say what they mean in plain language and stop hiding behind his book.
Sure, but also you might be on a city bus for... half an hour? It's not pleasant to have someone blast noise but it's nothing like a multi-hour flight. Why bother?
The bundling might feel necessary from Atari's side because OpenTTD would compete with Atari's re-release on platforms like Steam and GOG (unlike on OpenTTD's website, where you're already at the end of the funnel for OpenTTD specifically, so Atari doesn't feel like they're losing a sale).
> Today, we’re releasing a research preview of GPT‑5.3‑Codex‑Spark, a smaller version of GPT‑5.3‑Codex, and our first model designed for real-time coding.
You're right. It's funny because I kind of noticed that, but with all of these subtle model issues, I'm so used to being thrown off by the smallest thing that I've had to learn to 'trust the data', aka the charts, model standings, performance, etc. In this case, I was under the assumption it was the same model; clearly it's not.
Which is a bummer because it would be nice to try a true side-by-side analysis.
It's less funny when you consider that you were very confident about it, yet now it seems you haven't even bothered to run the model yourself, as you'd have noticed how different the quality of the responses is, not just the speed.
Kind of makes me ignore everything else you wrote too, because why would that be correct when you surely haven't validated that before writing it, and you got the basics wrong?
What a snide and insulting comment - and plainly wrong.
I literally stated 'I noticed that' - implying I'm using the model.
I'm 'running the model' literally as I write this, I use it every day.
What I was 'wrong' about was that '5.3 Codex Spark' is a different model than '5.3 Codex', which is rather a fine point.
I 'thought that I noticed something, but dismissed it' because I generally value the facts more than my intuition. It just so happened that I had that one fact wrong: 'Spark' is technically a different model, so it's not just 'a faster model', it will 'behave differently', which lends credence to the individual I was responding to.
Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?
>Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
If you train a transformer on (only) lots and lots of addition pairs, e.g. '38393 + 79628 = 118021', and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token, which in this instance is the sum of the two numbers.
We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
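To make that setup concrete, here's a minimal sketch of what such a training corpus looks like (my own toy illustration in Python, not the actual experiment's code; the model itself would be a small transformer trained on nothing but these strings):

```python
import random

# A toy corpus of addition strings: the only data the hypothetical
# transformer ever sees during training.
def make_example(rng: random.Random) -> str:
    a = rng.randint(0, 99999)
    b = rng.randint(0, 99999)
    return f"{a} + {b} = {a + b}"

rng = random.Random(0)
corpus = [make_example(rng) for _ in range(100_000)]
print(corpus[0])  # one "a + b = sum" string per example
```

The interesting part is that nothing in the objective says "learn to add"; the model discovers an addition algorithm only because it's the cheapest way to predict the tokens after the '='.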
Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:
"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"
What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.
Let's revisit your statement.
"the mechanics of how LLMs work to produce results are observable and well-understood".
Observable, I'll give you that, but how on earth can you look at the above and sincerely call it 'well-understood'?
It's pattern matching, likely from typography texts and descriptions of umbrellas. My understanding is that the model can attempt some permutations in its thinking, and eventually one permutation's tokens draw enough attention for it to attempt an answer; once it is attending to "everyday object", "arc", and "hook", it will reply with "umbrella".
>It's pattern matching, likely from typography texts and descriptions of umbrellas.
"Pattern matching" is not an explanation of anything, nor does it answer the question I posed. You basically hand waved the problem away in conveniently vague and non-descriptive phrase. Do you think you could publish that in a paper for ext ?
>Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong
I don't know what to tell you, but a J with the parenthesis upside down still resembles an umbrella. To think that a machine would recognize it's just a flipped umbrella and a human wouldn't is amazing, but here we are. It's doubly baffling because Claude quite clearly explains it in your transcript.
>I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.
Yes I realize that. I'm telling you that you're wrong.
>Do you think you could publish that in a paper?
You seem to think it's not 'just' tensor arithmetic.
Have you read any of the seminal papers on neural networks, say?
It's [complex] pattern matching as the parent said.
If you want models to draw composite shapes based on letter forms and typography then you need to train them (or at least fine-tune them) to do that.
I still get opposite (antonym) confusion occasionally in responses to inferences where I expect the training data is relatively lacking.
That said, you claim the parent is wrong. How would you describe LLM models, or generative "AI" models in the confines of a forum post, that demonstrates their error? Happy for you to make reference to academic papers that can aid understanding your position.
>You seem to think it's not 'just' tensor arithmetic.
If I asked you to explain how a car works and you responded with a lecture on metallic bonding in steel, you wouldn’t be saying anything false, but you also wouldn’t be explaining how a car works. You’d be describing an implementation substrate, not a mechanism at the level the question lives at.
Likewise, “it’s tensor arithmetic” is a statement about what the computer physically does, not what computation the model has learned (or how that computation is organized) that makes it behave as it does. It sheds essentially zero light on why the system answers addition correctly, fails on antonyms, hallucinates, generalizes, or forms internal abstractions.
So no: “tensor arithmetic” is not an explanation of LLM behavior in any useful sense. It’s the equivalent of saying “cars move because atoms.”
>It's [complex] pattern matching as the parent said
“Pattern matching”, whether you add [complex] to it or not, is not an explanation. It gestures vaguely at “something statistical” without specifying what is matched to what, where, and by what mechanism. If you wrote “it’s complex pattern matching” in the Methods section of a paper, you’d be laughed out of review. It’s a god-of-the-gaps phrase: whenever we don’t know or understand the mechanism, we say “pattern matching” and move on. But make no mistake, it’s utterly meaningless, and you’ve managed to say absolutely nothing at all.
And note what this conveniently ignores: modern interpretability work has repeatedly shown that next-token prediction can produce structured internal state that is not well-described as “pattern matching strings”.
Transformers trained on Othello or Chess games (same next token prediction) were demonstrated to have developed internal representations of the rules of the game. When a model predicted the next move in Othello, it wasn't just "pattern matching strings", it had constructed an internal map of the board state you could alter and probe. For Chess, it had even found a way to estimate a player's skill to better predict the next move.
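To make the probing concrete, here's a minimal sketch of the idea (my own illustration with random stand-in data; the array names and shapes are assumptions, not the papers' actual code). You freeze the model, record its hidden activations, and train a small linear classifier to read the board state back out of them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for real data: hidden activations from a frozen game-playing
# transformer (n_positions x d_model) and the true state of one board
# square at each position (0 = empty, 1 = black, 2 = white).
rng = np.random.default_rng(0)
activations = rng.normal(size=(10_000, 512))
square_state = rng.integers(0, 3, size=10_000)

# A "linear probe" is just logistic regression on the activations.
probe = LogisticRegression(max_iter=1000).fit(activations, square_state)
print(probe.score(activations, square_state))
```

If the probe decodes the square's state far above chance (it won't here, since the stand-in data is random), the board state is linearly represented inside the model, which is evidence of an internal world model rather than string-level statistics.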
There are other interpretability papers even more interesting than those. Read them, and perhaps you'll understand how little we know.
>That said, you claim the parent is wrong. How would you describe LLM models, or generative "AI" models in the confines of a forum post, that demonstrates their error? Happy for you to make reference to academic papers that can aid understanding your position.
Nobody understands LLMs anywhere near enough to propose a complete theory that explains all their behaviors and failure modes. The people who think they do are the ones who understand them the least.
What we can say:
- LLMs are trained via next-token prediction (sketched below, after this list) and, in doing so, are incentivized to discover algorithms, heuristics, and internal world models that compress training data efficiently.
- These learned algorithms are not hand-coded; they are discovered during training in high-dimensional weight space and because of this, they are largely unknown to us.
- Interpretability research shows these models learn task-specific circuits and representations, some interpretable, many not.
- We do not have a unified theory of what algorithms a given model has learned for most tasks, nor do we fully understand how these algorithms compose or interfere.
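For the first point, a minimal sketch of the objective itself (PyTorch, my own illustration with random stand-ins; a real model would produce the logits from the tokens):

```python
import torch
import torch.nn.functional as F

vocab_size, batch, seq_len = 256, 4, 16

# Stand-ins: random token ids and random logits.
tokens = torch.randint(0, vocab_size, (batch, seq_len))
logits = torch.randn(batch, seq_len, vocab_size)

# The model's prediction at position t is scored against token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())  # minimizing this scalar is the entire training signal
```

Everything else, the circuits, heuristics, and world models, is whatever weight configuration happens to drive that one number down.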
I came up with this metaphor based on my understanding of your comment.
Imagine we put a kid who doesn't know how to read or write, and knows nothing about what letters mean, in a huge library of books. The kid stays in the library for X amount of time, which is enough to look through all of them.
Not the way we would, but somehow, this kid manages to find patterns in the books.
After that X amount of time, we ask the kid a question: "What is the capital of Germany?"
The kid will have its own kind of map/pattern that lets it say "Berlin". Or it might say "Berlin is the capital of Germany" or "The capital of Germany is Berlin." The issue here is that we have no understanding of how the kid came up with the answer, or what kind of "understanding" or "mapping" was used to reach it.
The other part that shows we do not fully understand how LLMs work: ask an AI a very complex question, like "explain the mechanics of quantum theory to me like I'm 8 years old".
1. Every time, it will create a different answer. The main point is the same, but the words will be different, like in the example I gave above. There is an unlimited number of answers the AI can give you.
2. Can any human on Earth, without access to technology but with an unlimited amount of books/papers to check whatever info they need, tell us the exact sentence/words the LLM will use? No.
Then we do not fully understand LLMs.
You can create a linear regression model and give it data on 100 people, all of them blue-eyed. Then give it a 101st person and ask it to predict their eye color. You already know the exact answer: blue, with 100% certainty.
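A minimal sketch of that predictability (using a trivial majority-class predictor in place of literal linear regression, since the point is the determinism, not the model family):

```python
from collections import Counter

# Hypothetical dataset: eye colors of 100 people, all blue.
train = ["blue"] * 100

def predict(training_labels):
    # Predict the most common label seen in training.
    return Counter(training_labels).most_common(1)[0][0]

print(predict(train))  # "blue": we can state the exact output in advance
```

With an LLM, by contrast, nobody can write down in advance the exact sentence it will produce.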
I think what you two are going back and forth on is the heated debate in AI research regarding Emergent Abilities. Specifically, whether models actually develop "sudden" new powers as they scale, or if those jumps are just a mirage caused by how we measure them.
The concept “understand” is rooted in utility. It means “I have built a much simpler model which produces usefully accurate predictions, of the thing or behaviour I seek to ‘understand’”. This utility is “explanatory power”. The model may be in your head, may be math, may be an algorithm or narrative, it may be a methodology with a history of utility. “Greater understanding” is associated with models that are simpler, more essential, more accurate, more useful, cheaper, more decomposed, more composable, more easily communicated or replicated, or more widely applicable.
“Pattern matching”, “next token prediction”, “tensor math” and “gradient descent” or the understanding and application of these by specialists, are not useful models of what LLMs do, any more than “have sex, feed and talk to the resulting artifact for 18 years” is a useful model of human physiology or psychology.
My understanding, and I'm not a specialist, is there are huge and consequential utility gaps in our models of LLMs. So much so, it is reasonable to say we don't yet understand how they work.
A DS set to Auto mode will boot to the cartridge (and you can reflash the firmware to skip the health and safety screen). From there the OS is replaced with whatever is on the cart. A flashcart with the right shell will boot right into whatever app you want (and you can soft reset the console with a key combination to switch apps).
3DSes require a little more work and have a longer boot chain, but it's been thoroughly broken all the way to the bootstrapping process so you can use whichever firmware version and whatever patches you like with enough effort.
Once a DS has been flashed (skipping the health and safety screen), it also disables signature verification for DS Download Play, so you can beam homebrew directly to your DS's home screen with a wifi card. But this is an awkward process that most people don't actually do with their original DSes, as it requires putting tinfoil over a toothpick and jamming it into a hole next to the battery to close the flash-write jumper. I think the DS's crypto has also been defeated, but I can't find any documentation of arbitrary Download Play on unflashed DSes. There also seem to be no .nds signing keys in the leaks, from what I can tell.