>How does that work? It doesn't seem obvious to me?
Assume that "-" is the MINUS token. This means that, starting with the first character of "a", we have the token string "A MINUS B", which we can determine with a 3-token lookahead.
Now, if either A or B is a reserved keyword, the lexer knows that "A MINUS B" is not correct (unless the grammar happens to allow a keyword to occur next to a minus sign, which I don't think Java's does; if it does, you just avoid using that keyword when deriving a new one). At this point, the lexer can lex it as "A-B", which is (hopefully) a keyword.
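The lookahead idea above can be sketched roughly like this (a toy Python fragment, with a made-up keyword table; real Java keywords and the actual lexer structure would differ):

```python
# Assumed keyword table for illustration only.
KEYWORDS = {"non", "sealed", "non-sealed"}

def lex_dash(tokens):
    """Given three raw tokens A, '-', B, decide how to emit them.

    If either side is a reserved word, "A MINUS B" cannot be a
    subtraction of two identifiers, so join the span into a single
    dashed-keyword token instead.
    """
    a, minus, b = tokens
    if a in KEYWORDS or b in KEYWORDS:
        return [f"{a}-{b}"]      # one dashed keyword token
    return [a, minus, b]         # ordinary subtraction: A MINUS B

print(lex_dash(["non", "-", "sealed"]))  # ['non-sealed']
print(lex_dash(["a", "-", "b"]))         # ['a', '-', 'b']
```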
I can't think of any examples, but I feel like I have seen languages take a similar approach where they reserve a prefix, which allows them to create as many new reserved words as they like.
- C has given up on new identifiers save for the "reserved namespace", consisting of an underscore followed by an uppercase letter, which is how you get _Bool and _Generic. Oof, nuff said.
- C++ has stretched poor `static` to its limits, and now incorporates context-sensitive "identifiers with special meaning": `final` and `override`. No new hard keywords in C++17 AFAIK.
- C# takes this even further with its LINQ-driven contextual keywords.
- JavaScript stumbles about by adding new identifiers, BUT only in strict mode, and even then only sometimes; `yield` is especially ambiguous.
I think C# has only ever added contextual keywords after its initial release, and if I remember correctly there was only one breaking change to the semantics of source code, which I find impressive. I also didn't get the impression (from working with Roslyn and looking at its source here and there, as well as reading posts from people like Eric Lippert or Mads Torgersen about the language design) that contextual keywords are as much of a hassle as Brian makes them out to be.

About the only really annoying one I can think of in C# is the nameof operator, which has to be parsed as a method invocation and is only the nameof operator when there's no symbol with that unqualified name accessible at that point that can be invoked as a method (e.g. an actual method named nameof (rare), or a local variable of a delegate type). Pretty much all other contextual keywords are only valid in a few places where parsing is not ambiguous; you just happen to have a token there that's still valid elsewhere as an identifier.
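The nameof disambiguation rule described above can be sketched as follows (a toy Python model of the decision, not Roslyn's actual algorithm; the scope representation is invented for illustration):

```python
# Decide how to read "nameof(x)" given what is visible at the call site.
# An invocable symbol named "nameof" (a method, or a local of delegate
# type) wins over the operator reading.
def classify_nameof(scope):
    """scope: mapping from names to a rough kind ('method',
    'delegate_local', 'field', ...) visible at the call site."""
    kind = scope.get("nameof")
    if kind in ("method", "delegate_local"):
        return "method invocation"
    return "nameof operator"

print(classify_nameof({}))                    # nameof operator
print(classify_nameof({"nameof": "method"}))  # method invocation
```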
It's interesting to read about the musings and decision making processes for different languages, though. I'm sure Java and C# are both designed very carefully, yet with radically different goals and outcomes. And I'm sure both language design teams must ponder pretty much the same issues.
C is a little better than presented here because they add a header to the standard library with a #define in it that gives the new identifier a reasonable name; for example, you can access `_Bool` as `bool` if you `#include <stdbool.h>`. I actually think this is a reasonable compromise; you can opt in to the new "keyword" at the compilation-unit level.
Criminal? Not really. Usually in physical stores on private property 'management reserve the right to refuse admission' and they can do that any (legal) way they feel like, including stating that you can only gain admission once you have completed an impossible task, if that makes them happy.
I attended a talk by Coinbase. They get A LOT of fraudulent activity (both high end and amateur). They have like 7-8 employees who handle all the manual aspects of user verification. So they have to rely heavily on automated solutions.
I have never used them so I don't know how often they flag false positives but based on that talk I would at least qualify the statement with "it's a hard problem!"
Treating your users like shit is actually a pretty easy problem to solve. It's companies that don't 'care' to solve it that cause this kind of strife. Coinbase pulled the same shit with me.
It only left in terms of attention from startups, I suppose. I also disagree with the reason the article suggested, that SQL could not handle the loads. My opinion is that startups simply liked the idea of not having a schema, as it fit their agile approach. So they went NoSQL because it allowed them to get going faster and change things more easily.
> Deterministic password generators cannot accommodate varying password policies without keeping state
16 characters + 'Aa$1' has universally satisfied every website I have used to date except Baidu (which imposes a maximum of 16 characters total on passwords). The number of exceptions to this is probably minuscule.
> Deterministic password generators cannot handle revocation of exposed passwords without keeping state
That's what 'n' is for. Either you keep 'n' as a state variable, which is much easier to manage (and if you lose the file, you can try a few values of n and get yourself back into those websites without much hassle), or you sync the values of n every several months on the sites that use it.
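The scheme described in the last two points can be sketched like this (a minimal Python illustration; the choice of PBKDF2, the iteration count, and the 'Aa$1' suffix are assumptions for the sketch, not a vetted design):

```python
import base64
import hashlib

def derive(master: str, site: str, n: int = 0, length: int = 16) -> str:
    """Deterministically derive a site password from a master password.

    Bumping the counter n rotates the password for that site, which is
    how revocation works without storing the passwords themselves.
    """
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        master.encode(),
        f"{site}:{n}".encode(),  # site and counter act as the salt
        100_000,
    )
    body = base64.b64encode(raw).decode()[: length - 4]
    return body + "Aa$1"  # fixed suffix to satisfy complexity policies

# Same inputs always give the same password; bumping n revokes it.
assert derive("hunter2", "example.com") == derive("hunter2", "example.com")
assert derive("hunter2", "example.com", n=0) != derive("hunter2", "example.com", n=1)
```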
> Deterministic password managers can’t store existing secrets
This is orthogonal to the password problem. I store sensitive files that aren't passwords in a GPG-encrypted tarball on Dropbox.
> Exposure of the master password alone exposes all of your site passwords
This is true of stateful password managers as well, if you back up your database anywhere insecure or on any device (e.g. a laptop) that could potentially be mugged at gunpoint, confiscated by border control, leaked by buggy software, etc.
I also wonder what approaches other languages have taken since I haven't seen dashed keywords in any of the other big languages.