Given the ubiquity of React, I think immutability is generally rated pretty appropriately. If anything, I think mutability is underrated. It wouldn't be applicable to the domain of Temporal, but sometimes a mutable hash map is a simpler and more performant solution than any of the immutable alternatives.
Yes, you can mutate props. But no, it's probably not going to do what you want. If React added Object.freeze() (or a deep freeze) to the component render invoker, everything would stay the same, except props would be formally immutable instead of merely expected to be immutable. But that seems like a distinction without much of a difference: if you try a pattern like that without a pretty deep understanding of React internals, it's not going to do what you wanted anyway.
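To make the "formally immutable" point concrete, here's a minimal sketch of what a hypothetical deep freeze over props would do (this is illustrative, not anything React actually does; `deepFreeze` is my own helper):

```javascript
// Hypothetical sketch: if React deep-froze props before render,
// accidental mutation would throw a TypeError in strict mode
// instead of silently "working" without triggering a re-render.
"use strict";

function deepFreeze(obj) {
  // Recursively freeze the object and its own nested objects.
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const props = deepFreeze({ user: { name: "Ada" } });

try {
  props.user.name = "Grace"; // throws TypeError under "use strict"
} catch (e) {
  console.log(e instanceof TypeError); // true
}
console.log(props.user.name); // "Ada" — the mutation never landed
```

The mutation is rejected loudly rather than ignored, which is the only observable difference versus today's "please don't mutate props" convention.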
Well, mutability is the default, and React tries to address some of the problems with mutability. So React being popular as a sub-ecosystem inside a mutable environment isn't really evidence that people are missing out on the benefits of mutability.
Though React is less about immutability and more about unidirectional data flow plus the idiosyncrasy that you need values that are 'stable' across renders.
React doesn’t really force you to make your props immutable data. Using mutable data with React is allowed and just as error-prone as elsewhere. But you are certainly encouraged to use something like https://immutable-js.com together with React. At least that’s what I used before I discovered ClojureScript.
Immutability is often promoted to work around the complexity introduced by state management patterns in modern JS. If your state is isolated and you don't need features like time travel debugging, mutable data structures can be simpler and faster. Some so-called immutable libraries use hidden mutations or copy-on-write, which can actually make things slower or harder to reason about. Unless you have a specific need for immutability, starting with mutable structures is usually more sane.
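As a small illustration of the performance point (my own toy example, not from any of the libraries mentioned): a mutable update writes in place, while the common "immutable" spread pattern allocates a fresh object copying every key on each update.

```javascript
// Mutable: one in-place write into a Map, O(1) per update.
const counts = new Map();
counts.set("clicks", (counts.get("clicks") ?? 0) + 1);

// Immutable style: each update allocates a new object and
// copies all existing keys, O(n keys) per update.
let state = { clicks: 0 };
state = { ...state, clicks: state.clicks + 1 };

console.log(counts.get("clicks")); // 1
console.log(state.clicks); // 1
```

Both arrive at the same value; the difference is that the immutable version pays an allocation-and-copy cost per update, which is exactly what persistent/copy-on-write structures try (with varying success) to amortize away.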
protobuf solved serialization with schema evolution back/forward compatibility.
Skir seems to have great devex for the codegen part, but that's the least interesting aspect of protobufs. I don't see how the serialization proposed here fixes that without an equivalent of protobuf's numerical tagging.
Implicit versioning is a brittle design for backwards compatibility.
People (and LLMs) will keep adding fields out of order, and whatever has already been serialised (both in client/server interactions and in data stored in DBs) will break.
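For context, this is how protobuf sidesteps the problem (a standard sketch, not taken from the article): fields are identified on the wire by explicit numeric tags, not by declaration order, so inserting or reordering fields in the schema doesn't break previously serialised data.

```proto
syntax = "proto3";

message User {
  string name = 1;
  int32 id = 2;
  // A later revision can add this field anywhere in the message
  // body; old serialized bytes still decode correctly because
  // tags 1 and 2 are unchanged, and old readers simply skip the
  // unknown tag 3.
  string email = 3;
}
```

Any scheme that infers field identity from position has to reinvent something equivalent to those tag numbers to get the same back/forward compatibility.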
At 2'58'' you can see a frame of them projecting on Senate House, London.
During WW2 that was used by the Ministry of Information, and it inspired Orwell's description for the building of the Ministry of Truth. His wife Eileen worked in the building for the Censorship Department.
This guy from Effective Altruism pivoted away from helping the poor to trying to keep AI from becoming a Terminator-type entity, and then pivoted to: ah, it's okay for it to be a Terminator-type entity.
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
> then pivoted to being, ah, its okay for it to be a terminator type entity.
Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control over that scenario.
The quote was from 2022, for the first pivot to AI to prevent it from becoming a Terminator-style entity. The last pivot was not in the quote but is the topic of this current Hacker News post, where he takes credit for dropping the safety pledge:
"That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance."
I expect the next pivot will be that we need to allow the US military to use Anthropic to kill people because otherwise they will use a less pure AI to kill people and our Anthropic is better at only killing the bad guys, thus it is the lesser evil.
> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working
Incredibly long and verbose. I will stop short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?
If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...
At best, you're just trying to communicate to academics who are used to reading papers... Need to expect better from these people if we want to actually improve the world... Standards need to be higher.
Have you seen some of the stuff in the Enron or Epstein emails? They can be rather candid and act as if there is nothing to hide, or as if they will never get caught.