From my perspective--I have to use React, Lit, and all kinds of other creative solutions at my day job--I'm going to immediately devalue someone's argument if it starts with "I hate React".
React is not popular simply because engineers hate themselves or enjoy pain. There are problems it solves, and problems it creates. Explain what problems your solution solves, and feel free to dunk on React while you're at it, but write a tagline like this and I'm not gonna take you seriously.
"GitHub's own security guidance recommends pinning actions to full commit SHAs as the only truly immutable way to consume an action"
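Concretely, that guidance is the difference between referencing a movable tag and a full commit SHA in a workflow file. A sketch (the SHA below is a placeholder, not a real release):

```yaml
steps:
  # Mutable: "v4" is a tag the publisher can re-point at any commit
  - uses: actions/checkout@v4

  # Immutable: a full commit SHA can never be reassigned
  # (SHA shown is illustrative, not the actual v4 release commit)
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4
```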
Why doesn't GitHub just enforce immutable versioning for actions? If you don't want immutable releases, you don't get to publish an Action. They could decide to enforce this and mitigate this class of issue.
> Why doesn't GitHub just enforce immutable versioning for actions?
I always wish these arguments came with a requirement to include a response to "well, what about the other side of the coin?", otherwise, you've now forced me to ask: well?
The two sides of the coin: Security wants pinned versions, like you have, so that compromises aren't pulled in. Security does not want¹ pinned versions, so that security updates are pulled in.
The trick, of course, is some solution that allows the latter without the former, that doesn't just destroy dev productivity. And remember, …there is no evil bit.
(… I need to name this Law. "The Paradox of Pinning"?)
(¹it might not be so explicitly stated, but a desire to have constant updated-ness w/ security patches amounts to an argument against pinning.)
> it might not be so explicitly stated, but a desire to have constant updated-ness w/ security patches amounts to an argument against pinning
When you want to update, you update the hashes too. This isn’t an issue in any other packaging ecosystem, where locking (including hashing) is a baseline expectation. The main issue is developer ergonomics, which comes back to GitHub Actions providing very poor package management primitives out of the box.
(This is the key distinction between updating and passively being updated because you have mutable pointers to package state. The latter gets confused for the former, but you almost always want the former.)
This isn't a bad distinction that you've made, I just think even lockfiles (what you're suggesting, essentially) still fall prey to the same paradox I'm suggesting.
Yes, lockfiles prevent "inadvertent" upgrades, in the sense that you get the "pinned" version in the lockfile. So if we go with the lockfile, we're now on the "pinned" side of the paradoxical coin. Yes, we no longer get auto-pwned by supply chain, but security's problem is "why are we not keeping up to date with patches?" now, since the lockfile effectively prevents them.
And then you see tooling get developed, like what GitHub has in the form of Dependabot, which will automatically update that lockfile. Now we're just back to the other side of the paradoxical coin, just with more steps.
(This isn't to say we shouldn't do lockfiles. Lockfiles bring a lot of other benefits, and I am generally in favor of them. But I don't think they solve this problem.)
I don’t think this is a paradox, it’s just a process. You use lockfiles to establish consistent resolutions, and then you use dependency management tooling to update those lockfiles according to various constraints/policies like compatibility, release age, known vulnerabilities, etc.
(Another framing is that you might want floating constraints for compatibility reasons, but when actually running software you basically never want dependencies changing implicitly beneath you, even if they fix things. Fixes should always be legible, whether they’re security relevant or not.)
Their question isn't about pinned versions, it's about immutable versions. The question is why it is possible to change what commit "v5" refers to, not "why would you want to write v5".
You already don't get updates pulled in with the system unless they swap the version out from under you, which is not a normal way to deploy.
Version tags should obviously be immutable, and if you want to be automatically updated you can select 1.0.*, if you don't you just pick the version tag.
It amounts to an argument against pinning in a (IMO) weird world view where the package maintainer is responsible for the security of users' systems. That feels wrong. The user should be responsible for the security of their system, and for setting their own update policy. I don't want a volunteer making decisions about when I get updates on my machine, and I'm pretty security minded. Sure, make the update available, but I'll decide when to actually install it.
In a more broad sense I think computing needs to move away from these centralised models where 'random person in Nebraska'[0] is silently doing a bunch of work for everyone, even with good intentions. Decisions should be deferred to the user as much as possible.
Auto-upgrade to versions deemed OK by the security team. Basically, you need to get updates that patch exploits right away, then wait and be more patient for feature upgrades.
So, in the context of me questioning "yes, but exactly how is this supposed to work", you're essentially punting the question into a black box that won't betray us.
In the real world, though, we don't have a magic little black box: we have to actually implement that.
The only answer I have seen from real world security teams is variations of "why wouldn't we be keeping up with updates?", and that's an unpinned dep.
> You can't. They can execute arbitrary code. They can download another bash file via Curl and execute that.
Presumably you'd check the code of the action before you include it (and then don't use an action with non-pinned versions). This way you know the action won't execute arbitrary code for this version and won't get any other code because of version pinning.
The docker action you linked is ironic in this regard since every other version in the code seems to be pinned except the one you linked to.
This recommendation is currently broken. Even when you pin the full commit SHA for an action, that action may still pull in transitive dependencies (other actions) that aren't pinned.
A better question perhaps is why we've allowed ourselves to become so dependent on a single provider (GitHub). Supply chain attacks would have a significantly smaller blast radius if people started using their own forges. GitHub as a social network is no longer a good idea.
I think that GitHub should set up Actions so that whenever you run a GitHub Actions step, it checks to see if either you have pinned it to a SHA or if the repository has immutable tags configured. If not, put a giant warning at the top of every pipeline run so that people are aware of the issue.
Even then, that's only immutable for the workflow config. Many workflows then go on to pull in mutable inputs downstream (eg: default to "latest" version).
Because the true name of the feature is VisualSourceSafe actions. It's all over the code of the runner if you take a second to look, and the runner, like the rest of the feature, is of typical early 2000s Microsoft quality, which is to say, none at all.
I assume this is because it is modeled after git tags, and at this point it would be a major change to move away from this. But it should probably get started at some point.
Stores and Components are basic classes that don't introduce any new concepts (other than the fact that the JSX goes into the template method of the Component, and that they are reactive behind the scenes). There are no hooks like useState, and the design philosophy is that everything should feel as native and natural as JavaScript.
You are bringing up an important topic. The way I see it is that Gea's Store is a plain old JS class. It's just a native class. There really is no special syntax you need to pay attention to. Whereas Solid signals require you to follow a specific syntax and approach, and have their own gotchas. Like, the language doesn't have a createSignal method by default, and you don't "execute" what look like values in JS as you need to do in Solid, and although I'm looking forward to the official Signal API, Solid isn't following that either.
That's basically how Gea is more native, because stores are plain classes. I hope this clarifies my point a little bit more.
It's just a native class. There really is no special syntax you need to pay attention to.
Don't confuse syntax with code. Solid has no special syntax (other than JSX of course).
This isn't comparing apples to apples.
Solid has a Store primitive too, and it's a "plain old" proxied object.
How is `createStore` less native than `new Store()`? The `new` keyword didn't even exist in JS until 2015, and the underlying semantics (prototypical inheritance) never changed.
One of Solid's design goals is fine-grained reactivity for very high performance. That's why signals are getter functions, because calling them alerts the system to the precise "scope" that will need to be re-run when the value changes.
Since signals are functions, they can be composed easily, and passed around as values.
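The getter-registers-dependencies idea can be sketched in a few lines of plain JS. This is a toy model of the pattern, not Solid's actual implementation:

```javascript
// Tracks which computation is currently running, so reads can register it.
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    // Calling the getter records the running computation as a dependency.
    if (currentObserver) subscribers.add(currentObserver);
    return value;
  };
  const write = (next) => {
    value = next;
    // Re-run only the computations that actually read this signal.
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn) {
  currentObserver = fn;
  fn(); // first run records which signals were read
  currentObserver = null;
}

// Usage: the effect re-runs precisely when `count` changes.
const [count, setCount] = createSignal(0);
const seen = [];
createEffect(() => seen.push(count()));
setCount(1);
setCount(2);
console.log(seen); // [0, 1, 2]
```

Because `count` is just a function, it can be passed to other functions or wrapped in derived computations without any special plumbing.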
I'm not following—the `new` keyword has been with us since JavaScript's inception. You might be confusing it with the `class` syntax, but before that we could always `new` functions.
And yes, Solid has signals that require you to know how to write and work with them. I answered another comment on the thread about Solid stores—they also introduce a couple of gotchas and weird syntax. You just can't do `this.users.push(...)` or `this.users[2].loggedIn = true` in Solid stores.
Therefore `createStore` is less native than `new Store()`, because `new Store()` just gives you a plain (proxied) object you can manipulate in various ways and reactivity will persist thanks to the compiler.
And Gea's design goal is also fine-grained reactivity, which it delivers without getter functions in the code the developer writes; instead, the handlers are generated by the compiler.
Solid stores are a great improvement over raw signals, but they still come with their own gotchas. First off, it's an entirely new syntax. You need to learn its documentation. You always have to use setStore, and it has a weird syntax like `setStore("users", 2, "loggedIn", false)` and even pushing items to an array is weird. In Gea it's just regular JavaScript: `this.users[2].loggedIn = false` or `this.users.push(...)`. MobX also comes with its own syntax.
In the end Gea is basically as simple as a plain old JavaScript class, made reactive by its compiler.
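For what it's worth, the plain-mutation style could hypothetically be backed by a Proxy along these lines. This is a made-up sketch of the general technique, not Gea's actual implementation, and `reactive` is an invented helper:

```javascript
// Wraps an object so that plain JS mutation notifies a change callback.
function reactive(target, onChange) {
  return new Proxy(target, {
    get(obj, key) {
      const value = obj[key];
      // Wrap nested objects/arrays so deep mutation is observed too.
      return typeof value === "object" && value !== null
        ? reactive(value, onChange)
        : value;
    },
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value);
      return true;
    },
  });
}

const changes = [];
const store = reactive(
  { users: [{ loggedIn: true }, { loggedIn: true }, { loggedIn: true }] },
  (key) => changes.push(String(key))
);

// Plain JavaScript mutation, no setStore-style API:
store.users[2].loggedIn = false;
store.users.push({ loggedIn: true });
console.log(changes); // ["loggedIn", "3", "length"]
```

The trade-off discussed downthread applies: the mutation reads like ordinary JS, but any code holding a reference to the store can change it from anywhere.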
This is a design choice, and it's explained in their docs:

> Separating the read and write capabilities of a store provides a valuable debugging advantage. This separation facilitates the tracking and control of the components that are accessing or changing the values.
> You need to learn its documentation. You always have to use setStore, and it has a weird syntax like `setStore("users", 2, "loggedIn", false)` and even pushing items to an array is weird.
It's optional, and incidentally quite expressive and powerful. But they also support mutable drafts out of the box.
import { produce } from 'solid-js/store';

setStore(produce(state => {
  state.users[2].loggedIn = false;
}));
I understand this bit. The bit that I don't understand is how you compare the two invented concepts like `setStore` and `produce` to just `state.users[2].loggedIn = false`. To me it's very clear Gea's syntax requires you to write less code, while also requiring you to know fewer concepts.
The value judgement implied in "invented concepts" is kind of weird, and maybe gets at a core difference in how you and I think about this.
Frameworks have APIs; they define concepts. Learning concepts isn't a bad thing in and of itself. Especially if they are concepts which let you model your application more succinctly and efficiently.
What you mean is that you are leaving it to the user to learn (or conceive of) additional concepts which are external to Gea in order to build non-trivial reactive applications.
But "Gea requires you to write less code / know fewer concepts" can be reframed as "Gea opts out of solving some types of vanilla JS boilerplate for you". When you don't give your users "concepts", they're still going to end up writing a lot of code and learning concepts, just not within your API.
I see mpalmer counter each specific claim, while dashersw shifts to a slightly different argument rather than directly addressing the rebuttal.
One guy is doing the tech founder equivalent of a TED Talk ("my thing is more native and requires fewer concepts!") while another is quietly pointing out that the emperor has no clothes, and has receipts. One keeps doubling down because this is clearly his baby, while the other is just some experienced dev who's watched too many "simple mutable state" frameworks turn into maintenance nightmares.
One person is selling a vision. The other is explaining why that vision has been tried and mostly rejected by the industry for good reasons.
Only one of them is learning from the conversation.
Thank you for the discussion, I find it very interesting and I'd love to understand how you think. Why do you think setStore and produce let you model your application more succinctly and efficiently than just a direct assignment?
And what types of boilerplate do you see Gea opting out of?
Let's briefly set aside your belief that because JS supports mutation, a framework should as well.
Immutability and one-way dataflow are an unquestionable productivity win. Together they eliminate an entire class of complexity and result in well-defined boundaries for the components of your application. With two-way data binding, those boundaries have to be carefully recognized and preserved by the developer every time they touch the code.
So one place Gea won't save devs any time or grief is in testing. If any part of the app can affect any other part of the app, the surface area of a change very quickly becomes unknowable, and you are only as informed as your tests are thorough. Not boilerplate in the literal sense, but quite a bit of overhead in the form of combinatorial test cases.
Yes, JS has mutability. Yes, you can make two-way data binding work as a framework feature. That you should is an argument I don't think you've successfully made yet.
Let me ask - why do you think JSX lets you model your application more succinctly and efficiently than just a direct createElement call?
I see your point. I designed Gea to be one-way binding only first, and then decided to add two-way binding for props passed as objects. People can still easily only use one-way binding. Maybe this becomes a preference in the compiler config?
The argument for Gea to support two-way binding is basically circular and I believe well-made at this point. I want a framework to respect a language. Breaking two-way binding when it's a concept in the underlying language is like breaking the Liskov Substitution Principle. You can do it, but you probably shouldn't.
JSX is more succinct and efficient than raw DOM API because it's declarative, where the raw API is imperative.
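For contrast, here's the imperative shape JSX compiles away, using a toy `createElement` that returns plain object trees (illustrative only; not React's or Solid's actual implementation):

```javascript
// A minimal createElement: builds a plain element-tree node.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Declarative JSX:  <ul class="users"><li>Ada</li><li>Lin</li></ul>
// compiles to imperative nested calls:
const tree = createElement(
  "ul",
  { class: "users" },
  createElement("li", null, "Ada"),
  createElement("li", null, "Lin")
);

console.log(tree.children.length); // 2
```

The JSX describes the result; the call tree spells out the construction steps, which is the declarative/imperative distinction in miniature.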
> Maybe this becomes a preference in the compiler config?
Maybe, but it could be more complicated for you, the maintainer, than it's worth!
> JSX is more succinct and efficient than raw DOM API because it's declarative, where the raw API is imperative.
But that's also the difference between (e.g.) Solid's signals vs a plain (proxied) object that's passed around and mutated. I'd go so far as to say that mutable objects are one of the most "imperative" things about JS.
I don't want to recurse into philosophy but one could argue an assignment is more declarative than a function call :) Solid is function calls everywhere, and extra code, vs plain objects.
I am pondering this, but would love to see an example to make it more concrete. The way I see it is that this reactivity is completely on the compiler's side, and there's no more ambiguity or pitfalls than misrepresenting a dependency array in a React hook.
Should an application framework written in Rust encourage the usage of `unsafe` blocks in application code? It certainly allows for more power and flexibility, and it's supported in the language.
That kind of thing is for someone with governing experience to solve. I suppose revoking them every now and then would be good, and maybe for ease of minting new unique codes they could loosely be tied to location. I'm not too worried about that, as your IP could be used in a similar way, and the services shouldn't be reading the codes themselves regularly anyway.
The average LLM writes cleaner, better-factored code than the average engineer at my company. However, I worry about the volume of code leading to system-scale issues. Prior to LLMs, the social contract was that a human needs to understand changes and the system as a whole.
With that contract being eroded, I think the sloppiness of testing, validation, and even architecture in many organizations is going to be exposed.
The social contract where I work is that you’re still expected to understand and be accountable for any code you ship. If you use an LLM to generate the code, it’s yours. If someone is uncomfortable with that, then they are leaning too hard on the LLM and working outside of their skill level.
Hey man, if you ever see this, not your fault it ended up on HN early! I'm sure this is solid for your use case. That's just something that would dissuade me from choosing it!
Hey! I didn't realize this wasn't even posted by the creator of the library. Building shit and launching is hard, don't let it dissuade you from putting stuff out there though!
Also, check out Base UI. Radix is fine, but Base UI has generally better primitives and is being actively developed.