Actually FIPS-compliant (certified) gives you confidence in some basic competence of the solution.
Merely FIPS-compatible (i.e. picking algorithms that could be FIPS-compliant) is generally neutral to negative.
I'm not 100% up to date, so this may have changed, but AEAD used to be easier if you didn't follow FIPS at all than if you stayed FIPS-compatible. Still possible, but with more footguns due to regulatory lag in approved techniques.
Overall, IMO the other top-level comment of "only do FIPS if you have a pencil-pusher benefit" applies.
FIPS-140 allowed encryption using 3DES up until Jan 1 2024, and allowed certification of modules containing SHA-1 through the end of 2025. There is some transition-timeline nuance involved, but those examples are in general pretty horrible from a security perspective.
Why do you think it has too many children? If we're talking direct descendants, I've seen way larger directories in (git-managed) file systems than I've ever seen in an AST.
I don't think there's a limit in git. The structure might be a bit deep for git and thus some things might be unoptimized, but the shape is the same.
Directories use the `tree` object type in git whereas files use `blob`. What I understand you to suggest is using the tree nodes instead of the blob nodes as the primary type of data.
This is an interesting idea for how to reuse more of git's infrastructure, but it wouldn't be backwards compatible in the traditional sense either. If you checked out the contents of that repo you'd get every node in the syntax tree as a file, and let's just say that syntax nodes as directories aren't going to be compatible with any existing tools.
But even if I wanted to embrace it I still think I'd hit problems with the assumptions baked into the `tree` object type in git. Directories use a fundamentally different model than syntax trees do. Directories tend to look like `<Parent><Child/></>` while syntax trees tend to look like `<Person> child: <Person /> </>`. There's no room in git's `tree` objects to put the extra information you need, and eventually the exercise would just start to feel like putting a square peg in a round hole.
Instead of learning that I should use exactly git's data structure to preserve compatibility, I think my learning should be that a successful structure needs to be well-suited to the purpose it is being used for.
Your pseudo XML seems quite broken, since the supposed git style doesn't close the parent at all.
But a git directory entry contains:
* a type (this one is quite limited, so I'm not sure how well it could be (ab)used)
* a name
* a pointer to the content
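As a rough sketch of that layout in git's on-disk tree format (where the mode field doubles as the type):

```python
import hashlib

def tree_entry(mode: bytes, name: bytes, sha: bytes) -> bytes:
    # A git tree entry is "<mode> <name>\0" followed by the raw
    # 20-byte object id of the child (blob or subtree).
    return mode + b" " + name + b"\0" + sha

# Object id of a blob whose entire content is the digit "2":
blob_sha = hashlib.sha1(b"blob 1\x002").digest()

# Mode 100644 marks a regular file; 40000 would mark a subtree.
entry = tree_entry(b"100644", b"Number", blob_sha)
```

Note there is exactly one name slot per entry, which is the crux of the mismatch discussed below: the name the parent uses and the node's own identity have to share it.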
The pseudo-XML is a language I made called CSTML: https://docs.bablr.org/guides/cstml. I looked back and I don't see any unclosed tags, nor anything that would be an unclosed tag in XML either.
I'm sure you could abuse a git `tree` to squish in the extra data, but my point was just that you'd have to, because a directory doesn't have a name that's separate from the name its parent uses to point to it. An AST node has both a name its parent uses to point to it and a named identity of its own, e.g.:
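Roughly, in the style of the pseudo-XML above (a sketch from context, not exact CSTML syntax), where `left` is the name the parent points with and `Number` is the node's own identity:

```
<BinaryExpression>
  left: <Number> '2' </>
</>
```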
So my point is that to fit this into git you'd have to do something funky like make a folder called `left_Number`, and my question about this is the same question as I have in the first place about creating a folder on disk named `Number` whose contents are only the digit `2`. Since every existing tool will present the information as overwhelming amounts of nonsense compared to what users are used to seeing, has any compatibility at all been created? What was the point?
I also see the need to check out files as an aspect of git that relates purely to its integration with editors through flat text files. If git were more of a database than a filesystem, it's fair to assume you'd prefer to integrate database access directly into the IDE.
That black-hole behavior is a result of corporate processes though, not of git.
Business continuity (no uncontrolled external dependencies) and corporate security teams wanting to be able to scan everything.
Also wanting to update everyone's dependencies when they backport something.
Once you have those requirements, most of the benefits of multi-repo / round-tripping over releases just don't hold anymore.
The entanglement can be stronger, but if teams build clean APIs it's no harder than removing it from a cluster of individual repositories.
That might be a pretty load-bearing "if", though.
I wouldn't say I want to abandon anything git is doing as much as evolve it. Objects need to be able to contain syntax tree nodes, and patches need to be able to target changes to particular locations in a syntax tree instead of just by line/col.
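A hypothetical sketch of what a tree-addressed patch might look like (all field names invented for illustration, not an existing format):

```python
# A patch that targets a node by its path through the syntax tree,
# rather than by line/column offsets (hypothetical format).
patch = {
    "target": ["FunctionDeclaration:main", "body", "statements", 2],
    "op": "replace",
    "new_node": {"type": "Number", "value": "3"},
}

def describe(p: dict) -> str:
    # Render the patch target as a path, the way a diff header
    # might render a hunk location.
    return f"{p['op']} at {'/'.join(map(str, p['target']))}"
```

The point of the sketch is only that the address survives unrelated edits elsewhere in the file, which line/col offsets don't.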
Right, but for what purpose? I don't see much gain, and now you're left trying to fit a square peg into a round hole. The git CLI would be technically working, but not practically useful. Same with an IDE: if you checked the files out you could technically open them but not easily change your program.
You could also think of it this way: if I abuse git's protocols to do my thing, I pretty much give up my chance to become a standard in the future. If people are going to adopt a new standard they want it to be clean: you don't want to be trying to sell a greenfield technology that's loaded up with legacy tech debt. Making a data format that renders broken in every existing tool that touches it would be showing a lot of bad faith up front.
The solution to this seems to be to issue multiple "IDs". Essentially the government mints you a batch of, say, 30 "IDs", and you can use each of those once per service to verify an account (30 verified accounts per service). That allows for the use case of needing to verify multiple accounts without allowing you to verify unlimited accounts (and therefore run into the large-scale misuse issue I pointed out).
If you need to verify even more accounts the government can have some annoying process for you to request another batch of IDs.
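A minimal sketch of the cap mechanism (all names hypothetical; the real scheme would also need the IDs to be unlinkable across services):

```python
import secrets

def mint_batch(size: int = 30) -> list[str]:
    # Government mints a batch of opaque, single-use-per-service IDs.
    return [secrets.token_hex(16) for _ in range(size)]

class Service:
    """One relying service: it caps verified accounts per person
    simply by refusing to accept the same ID twice."""

    def __init__(self) -> None:
        self.used: set[str] = set()

    def verify_account(self, gov_id: str, valid_ids: set[str]) -> bool:
        # valid_ids stands in for whatever check proves the ID is genuine
        if gov_id in valid_ids and gov_id not in self.used:
            self.used.add(gov_id)
            return True
        return False
```

Each service enforces the cap locally, so no central party has to learn which services a given person verified with.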
In real-world scenarios, I can observe them while they handle my ID.
And systematic abuse (e.g. a stored video that clearly shows the ID) would be a violation taken seriously.
With online providers, it's barely newsworthy when they abuse the data they get.
I'm not against age verification (at least not strongly), but I'd want it done in a two-party, zero-trust way.
I.e. one party signs a JWT-like token containing only a single bit, and the other validates it without ever contacting the issuer about the specific token.
So one party knows the identity and one knows the usage,
but the two are never linked.
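The split can be sketched with an ordinary offline signature check. This is a toy with deliberately tiny RSA numbers, purely to show the information flow; a real issuer would use Ed25519 or RSA-2048 via a proper crypto library (and something like blind signatures to prevent the issuer linking tokens to sites even under collusion):

```python
import hashlib
import json

# Toy RSA key (tiny numbers for illustration only).
p, q = 61, 53
n = p * q            # public modulus
e = 17               # public exponent
d = 413              # private exponent: (e * d) % lcm(p-1, q-1) == 1

def _digest(claim: dict) -> int:
    raw = json.dumps(claim, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(raw).digest(), "big") % n

def issue_token(over_18: bool) -> dict:
    # Issuer: knows the identity, signs exactly one bit,
    # and never learns where the token gets used.
    claim = {"over_18": over_18}
    return {"claim": claim, "sig": pow(_digest(claim), d, n)}

def verify_token(token: dict) -> bool:
    # Verifier: checks the signature offline with the public key (e, n).
    # It learns the bit but not the identity, and never contacts the issuer.
    return pow(token["sig"], e, n) == _digest(token["claim"])
```

The verifier only ever sees `(e, n)` and the one-bit claim, which is the "never related" property: neither side holds both the identity and the usage.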