Hacker News | ongy's comments

Crypto-wise, FIPS is outdated but not horrible.

Actually being FIPS compliant (certified) gives you confidence in some basic competence of the solution.

Just being FIPS compatible (i.e. picking algos that could be FIPS compliant) is generally neutral to negative.

I'm not 100% up to date, so this might have changed, but AEAD used to be easier if you ignored FIPS entirely than if you stayed FIPS compatible. Still possible, but more footguns due to regulatory lag in techniques.

Overall, IMO the other top-level comment of "only FIPS if you have pencil-pusher benefit" applies.


FIPS-140 allowed encryption using 3DES up until Jan 1 2024, and allowed certification of modules containing SHA-1 through the end of 2025. There is some transition-timeline nuance involved, but those examples are in general pretty horrible from a security perspective.

I love fastmail, but I really wish they had servers close to me.

The high ping kills the throughput on davfs and makes their website hosting a pain to update :(


Whereabouts are you located?

Smack in the center of Europe (southern Germany). Getting >100 ms pings.

Heh I’m sorry to hear that. The whole internet is that slow for us here in Australia.

I'm aware. I'm worried we'll get an Aussie customer at work and I'll have to fix their access to our systems...

Granted, we already have US/EU/Asia as distinct regions. AUS would just make failover even worse.


Why do you think it has too many children? If we are talking direct descendants, I have seen way larger directories in (git-managed) file systems than I've ever seen in an AST.

I don't think there's a limit in git. The structure might be a bit deep for git and thus some things might be unoptimized, but the shape is the same.

Tree.


Directories use the `tree` object type in git whereas files use `blob`. What I understand you to suggest is using the tree nodes instead of the blob nodes as the primary type of data.

This is an interesting idea for how to reuse more of git's infrastructure, but it wouldn't be backwards compatible in the traditional sense either. If you checked out the contents of that repo you'd get every node in the syntax tree as a file, and let's just say that syntax nodes as directories aren't going to be compatible with any existing tools.

But even if I wanted to embrace it I still think I'd hit problems with the assumptions baked into the `tree` object type in git. Directories use a fundamentally different model than syntax trees do. Directories tend to look like `<Parent><Child/></>` while syntax trees tend to look like `<Person> child: <Person /> </>`. There's no room in git's `tree` objects to put the extra information you need, and eventually the exercise would just start to feel like putting a square peg in a round hole.

Instead of learning that I should use exactly git's data structure to preserve compatibility, I think my learning should be that a successful structure needs to be well-suited to the purpose it is being used for.


Your pseudo-XML seems quite broken, since the supposed git style doesn't close the parent at all.

But a git directory entry contains:

* a type (this one is quite limited, so I'm not sure how well that could be (ab)used)
* a name
* a pointer to the content

Which is exactly what an AST entry has.
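As a sketch of that parallel (a hypothetical `Entry` record, not git's actual on-disk encoding), the same three fields describe both a directory entry and an AST child:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    """Hypothetical record with the three fields a git tree entry carries."""
    type: str     # git: "blob"/"tree"; AST: the node kind
    name: str     # git: file name; AST: the field name on the parent
    pointer: str  # git: object hash; AST: reference to the child node

# A directory entry and an AST entry, same shape:
dir_entry = Entry(type="blob", name="README.md", pointer="a94a8fe5cc...")
ast_entry = Entry(type="Number", name="left", pointer="node-17")
```

The field names here are illustrative; git's real tree entries store a mode, a name, and a SHA.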


The pseudo-XML is a language I made called CSTML: https://docs.bablr.org/guides/cstml. I looked back and I don't see any unclosed tags, nor anything that would be an unclosed tag in XML either.

I'm sure you could abuse a git `tree` to squish in the extra data, but my point was just that you'd have to, because a directory doesn't have a name that's separate from the name its parent uses to point to it. An AST node has both a name that its parent uses to point to it and a named identity, e.g.:

```
<BinaryExpression>
  left: <Number '2' />
  op: <'+'>
  right: <Number '2' />
</>
```

So my point is that to fit this into git you'd have to do something funky like make a folder called `left_Number`, and my question about this is the same question as I have in the first place about creating a folder on disk named `Number` whose contents are only the digit `2`. Since every existing tool will present the information as overwhelming amounts of nonsense compared to what users are used to seeing, has any compatibility at all been created? What was the point?

I also see the need to check out files as being an aspect of git that relates purely to its integration with editors through flat text files. But if git were more of a database than a filesystem, it's fair to assume that you'd prefer to integrate database access directly into the IDE.


That black hole behavior is a result of corporate processes though.

Not a result of git.

Business continuity (no uncontrolled external dependencies) and corporate security teams wanting to be able to scan everything. Also wanting to update everyone's dependencies when they backport something.

Once you have those requirements, most of the benefits of multi-repo / roundtripping over releases just don't hold anymore.

The entanglement can be stronger, but if teams build clean APIs it's no harder than removing it from a cluster of individual repositories. That might be a pretty load-bearing "if", though.


What issues do you see in git's data model to abandon it as wire format for syncing?


I wouldn't say I want to abandon anything git is doing as much as evolve it. Objects need to be able to contain syntax tree nodes, and patches need to be able to target changes to particular locations in a syntax tree instead of just by line/col.


An AST is a tree as much as the directory structure currently encoded in git.

It shouldn't be hard to build a bijective mapping between a file system and an AST.
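A minimal sketch of such a mapping, assuming a toy AST of nested dicts with string leaves (a real version would need escaping rules for names containing `/`):

```python
def to_paths(node, prefix=""):
    """Flatten a nested-dict AST into (path, leaf) pairs, like files in dirs."""
    if isinstance(node, dict):
        out = []
        for name, child in node.items():
            out.extend(to_paths(child, f"{prefix}/{name}"))
        return out
    return [(prefix, node)]  # a leaf becomes a file whose content is its value

def from_paths(pairs):
    """Invert to_paths: rebuild the nested-dict AST from path/leaf pairs."""
    root = {}
    for path, leaf in pairs:
        parts = path.strip("/").split("/")
        cur = root
        for p in parts[:-1]:
            cur = cur.setdefault(p, {})
        cur[parts[-1]] = leaf
    return root

ast = {"BinaryExpression": {"left": "2", "op": "+", "right": "2"}}
assert from_paths(to_paths(ast)) == ast  # round-trips on this toy domain
```

On this restricted domain the two functions are inverses, which is the bijection being claimed.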


Right, but for what purpose? I don't see much gain, and now you're left trying to fit a square peg into a round hole. The git CLI would be technically working, but not practically useful. Same with an IDE: if you checked the files out you could technically open them but not easily change your program.


The git server would continue to work.

The cli really isn't the greatest either way. But there's lots of infrastructure to make the sharing work reasonably well.


You could also think of it this way: if I abuse git's protocols to do my thing, I pretty much give up my chance to become a standard in the future. If people are going to adopt a new standard they want it to be clean: you don't want to be trying to sell a greenfield technology that's loaded up with legacy tech debt. Making a data format that renders broken in every existing tool that touches it would be showing a lot of bad faith up front.


How does that prevent the ID service from discovering which services you use it for?


You could use a scheme that hashes a site-specific identifier together with an identifier on the smart element of the ID.

If that ever repeats, the same ID was used twice. At the same time, the site ID would act as a salt to prevent simple matching between services.
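A sketch of that scheme with made-up identifiers; the site ID salts the hash, so pseudonyms repeat within one service (making reuse detectable) but can't be matched across services:

```python
import hashlib

def pseudonym(site_id: str, card_secret: str) -> str:
    """Derive a per-site identifier; the site ID acts as the salt."""
    return hashlib.sha256(f"{site_id}:{card_secret}".encode()).hexdigest()

a = pseudonym("example-forum", "card-secret-123")
b = pseudonym("example-forum", "card-secret-123")
c = pseudonym("other-site", "card-secret-123")

assert a == b  # the same ID reused on one service produces the same pseudonym
assert a != c  # but pseudonyms on different services can't be trivially linked
```

This is only the core idea; a production scheme would use a keyed construction so the site can't brute-force the card secret from the pseudonym.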


People do, in fact, have multiple profiles. For very valid reasons.


The solution to this seems to be to issue multiple "IDs". So essentially the government mints you a batch of, say, 30 "IDs" and you can use each of those once per service to verify an account (30 verified accounts per service). That allows for the use case of needing to verify multiple accounts without allowing you to verify unlimited accounts (and therefore run into the large-scale misuse issue I pointed out).

If you need to verify even more accounts the government can have some annoying process for you to request another batch of IDs.
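A toy model of the bookkeeping that scheme needs on the verifier's side (all names hypothetical):

```python
class IdBatch:
    """Sketch: a batch of IDs, each usable once per service."""
    def __init__(self, ids):
        self.ids = set(ids)
        self.used = set()  # (id, service) pairs already consumed

    def verify(self, id_, service):
        if id_ not in self.ids or (id_, service) in self.used:
            return False  # unknown ID, or already spent on this service
        self.used.add((id_, service))
        return True

batch = IdBatch([f"id-{i}" for i in range(30)])
assert batch.verify("id-0", "forum-a")      # first use on a service: accepted
assert not batch.verify("id-0", "forum-a")  # same ID twice on one service: rejected
assert batch.verify("id-0", "forum-b")      # fine on a different service
```

A real system would track this state at the issuer or via unlinkable tokens rather than a shared set, but the accept/reject logic is the same.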


This is a solved problem in the authentication space. Short-lived tokens backed by short-lived keys.

A token is generated containing a timestamp and a payload, and is signed with a private key.

The public key is available through a public API. You throw out any token older than 30 seconds.

Unlimited IDs.

That's basically what you want.
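A sketch of that flow; for brevity it uses an HMAC with a shared key where a real deployment would sign with the issuer's private key and publish the public key:

```python
import hashlib
import hmac
import json

KEY = b"issuer-signing-key"  # stands in for the issuer's private key

def issue_token(payload: dict, now: float) -> str:
    """Sign a timestamped payload."""
    body = json.dumps({"ts": now, **payload}, sort_keys=True)
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def verify_token(token: str, now: float, max_age: float = 30.0) -> bool:
    """Check the signature, then throw out anything older than max_age."""
    body, sig = token.rsplit("|", 1)
    expect = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expect):
        return False
    return now - json.loads(body)["ts"] <= max_age

t = issue_token({"over18": True}, now=1000.0)
assert verify_token(t, now=1010.0)       # fresh token: accepted
assert not verify_token(t, now=1031.0)   # older than 30 seconds: rejected
```

The verifier never contacts the issuer per token; it only needs the (here shared, normally public) key and a clock.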


Which either allows a fingerprint of the signing key to be used for the same purpose,

or opens the system up to the originally posted attack of providing ~an open relay.


Which hotel asks for ID online..? I've only ever had to provide it once, on-site while checking in.

And even then, only when I'm in foreign countries.


Happens quite often with Airbnb for example. You often don't meet the host in person so there's no way to show them a physical ID.


Ahh. The not-quite-a-hotel. I don't think I've ever used them.


My main issue is trust.

In real-world scenarios, I can observe them while they handle my ID. And systematic abuse (e.g. some video that gets stored and shows it clearly) would be a violation taken seriously.

With online providers it's barely newsworthy if they abuse the data they get.

I'm not against age verification (at least not strongly), but I'd want it in a two-party, zero-trust way. I.e. one party signs a JWT-like thing containing only one bit, the other validates it without ever contacting the issuer about the specific token.

So one party knows the identity, one knows the usage, but they are never related.


> So one party knows the identity, one knows the usage, but they are never related

I could be wrong but I think this is how the system we have in place in Italy works. And I agree that it's how it should work.


No printer.

