They are anything but countless. There’s a specific number of unarmed Black people who are killed each year by police. That number is typically less than 30.
You're implying that destroying innocent people's property and starting riots that lead to deaths, including 18 murders in Chicago alone on May 31st, is a justified response to an act of injustice, which is an absolutely irresponsible moral outlook, exhibiting morally elitist entitlement.
In fact, it is the 'incitement to riot' the parent comment refers to that an impartial application of the proposed rule would identify as worthy of banning.
If you keep posting like that we will ban you. I'm not going to ban you right now because you've also posted some ok comments within recent memory, but the not-ok comments are seriously not ok. No more of this, please.
Sorry, I'll try not to incite flame wars. In my defense, anything related to these kinds of issues is highly likely to rouse emotions. The only way these topics can even be brought up without a flame war is if everyone agrees and no one expresses a dissenting viewpoint. Just food for thought.
But yes I could certainly have been less provocative.
> anything related to these kinds of issues is highly likely to rouse emotions.
You can say that again! And yet the seemingly innocuous mandate of this site—gratification of intellectual curiosity—actually requires us all to work hard at not succumbing to that dynamic. If you think about it (well, when I think about it), two odd things follow from that: (1) this is a rather larger project than it seems; (2) it's actually doable.
Yes, it is doable. I made a mistake in how I dealt with inflammatory comments, responding in kind instead of reporting them. I wrongly assumed that moderation was laxer, and the forum more of a free-for-all, than they are.
Equating protests with riots is a bit nonsensical. We should consider how the police have treated black people for the last 100+ years, to start. Protests like these have been long overdue.
I don't understand your point. There were widespread riots across the US during the summer of BLM, with many police forces standing down in the face of them.
And your point about black people in no way justifies the indiscriminate violence perpetrated by the rioters.
At what point does brutalizing innocent people become a justified response to injustice?
The answer is at no point.
Anyway, from what I recall, 350,000 white men died fighting for the Union, in a civil war to end an institution that had existed since the beginning of human culture and that continued to be practiced all around the world into the 20th century, until imperialist powers ended it.
I thought the riots were about the abuse inflicted upon George Floyd. Sorry for not being adequately sensitive. I think you're over-reacting a bit and it shows misplaced priorities.
Your accusation lacks the generosity of a benefit of the doubt.
It could be the fact that George Floyd's death triggered the movement, that his character figured centrally in it, and that hashtags of his name were its primary symbol.
But instead you take the typical high-handed approach of the moral elitist movement.
I don't consider myself a tolerant person, especially when it comes to obtuseness. You are engaging in a bad faith argument by purposefully being obtuse.
If you're acting in good faith, maybe consider your news bubble? Try places other than Hacker News; its demographic is upper-middle-class white men who are completely invested in capitalism, which makes for some pretty horrific discussions.
We've banned this account for repeatedly breaking the site guidelines with flamewar, personal attacks, and ideological battle, and ignoring our many requests to stop. Not cool.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.
I'm very skeptical about the benefits of a binary JavaScript AST. The claim is that a binary AST would save on JS parsing costs. However, JS parse time is not just tokenization. For many large apps, the bottleneck in parsing is instead in actually validating that the JS code is well-formed and does not contain early errors. The binary AST format proposes to skip this step [0], which is equivalent to wrapping function bodies with eval… This would be a major semantic change to the language that should be decoupled from anything related to a binary format. So IMO the proposal conflates tokenization with changing early error semantics; I'm skeptical the former has any benefits, and the latter should be considered on its own terms.
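To make "early error" concrete, here's a contrived example of my own: today a parser must reject this at load time, whereas deferring the check would surface the error only when the function is first invoked, much like a body hidden inside eval:

    "use strict";
    function outer() {
      // Duplicate parameter names are an early (load-time) SyntaxError in strict
      // mode, so an eager parser has to scan this body even though it never runs.
      function inner(a, a) { return a; }
    }
    // Today: the SyntaxError is thrown while parsing the file, before any code runs.
    // With deferred early errors, the failure could move to the first call of
    // inner(), roughly like eval('function inner(a, a) { return a; }').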
Also, there's immense value in text formats over binary formats in general, especially for open, extensible web standards. Text formats are more easily extended as the language evolves because they typically have some amount of redundancy built in. The W3C outlines the value here (https://www.w3.org/People/Bos/DesignGuide/implementability.h...). A text format for JS in general also means engines/interpreters/browsers are simpler to implement, and therefore that JS code has better longevity.
Finally, although WebAssembly is a different beast and a different language, it provides an escape hatch for large apps (e.g. Facebook) to go to extreme lengths in the name of speed.
We don't need to complicate JavaScript when such a powerful mechanism, already tuned to perfectly complement it, exists.
Early benchmarks seem to support the claim that we can save a lot on JS parsing costs.
We are currently working on a more advanced prototype on which we will be able to accurately measure the performance impact, so we should have more hard data soon.
It seems like one big benefit of the binary format will be the ability to skip sections until they're needed, so the compilation can be done lazily.
But isn't it possible to get most of that benefit from the text format already? Is it really very expensive to scan through 10-20MB of text looking for block delimiters? You have to check for string escapes and the like, but it still doesn't seem very complicated.
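Roughly the kind of scan I have in mind, as a naive sketch of my own (it handles quoted strings and backslash escapes, but deliberately ignores comments, template literals, and regex literals):

    // Naive block-delimiter scan: returns [start, end] index pairs for {...} blocks.
    // It handles single/double-quoted strings and backslash escapes only; it does
    // NOT handle comments, template literals, or regex literals.
    function findBlocks(src) {
      const blocks = [];
      const stack = [];
      for (let i = 0; i < src.length; i++) {
        const c = src[i];
        if (c === '"' || c === "'") {
          // Skip the string literal, honouring backslash escapes.
          const quote = c;
          i++;
          while (i < src.length && src[i] !== quote) {
            if (src[i] === '\\') i++;
            i++;
          }
        } else if (c === '{') {
          stack.push(i);
        } else if (c === '}') {
          const start = stack.pop();
          if (start !== undefined) blocks.push([start, i]);
        }
      }
      return blocks;
    }

Even this toy version still touches every character once, but it does very little work per character.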
Well, for one thing, a binary format’s inherent “obfuscatedness” actually works in its favor here. If Binary AST is adopted, I’d expect that in practice, essentially all files in that format will be generated by a tool specifically designed to work with Binary AST, that will never output an invalid file unless there’s a bug in the tool. From there, the file may still be vulnerable to random corruption at various points in the transit process, but a simple checksum in the header should catch almost all corruption. Thus, most developers should never have to worry about encountering lazy errors.
By contrast, JS source files are frequently manipulated by hand, or with generic text processing tools that don’t understand JS syntax. In most respects, the ability to do that is a benefit of text formats - but it means that syntax errors can show up in browsers in practice, so the unpredictability and mysteriousness of lazy errors might be a bigger issue.
I suppose there could just be a little declaration at the beginning of the source file that means “I was made by a compiler/minifier, I promise I don’t have any syntax errors”…
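Here's a rough sketch of what I mean by the checksum, with a completely made-up container layout, not anything from the actual proposal:

    // Hypothetical layout: [4-byte magic][4-byte checksum][payload].
    // This just illustrates how cheap a corruption check is compared to
    // re-validating a whole source file.
    function fnv1a(bytes) {
      let h = 0x811c9dc5;
      for (const b of bytes) {
        h ^= b;
        h = Math.imul(h, 0x01000193) >>> 0;
      }
      return h >>> 0;
    }

    function verify(file) {  // file: Uint8Array of the whole download
      const view = new DataView(file.buffer, file.byteOffset, file.byteLength);
      const magic = view.getUint32(0, true);
      const storedHash = view.getUint32(4, true);
      const payload = file.subarray(8);
      if (magic !== 0x42494e4a) throw new Error('not a BinaryJS-style file');
      if (fnv1a(payload) !== storedHash) throw new Error('payload corrupted in transit');
      return payload; // hand off to a lazy decoder with some confidence it is intact
    }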
In any case, parsing binary will still be faster, even if you add laziness to text parsing.
> a simple checksum in the header should catch almost all corruption
For JavaScript, you have to assume the script may be malicious, so it always has to be fully checked anyway.
It's true that the binary format could be more compact and a bit faster to parse. I just feel that the size difference isn't going to be that big of a deal after gzipping, and the parse time shouldn't be such a big deal. (Although JS engine creators say parse time is a problem, so it must be harder than I realise!)
> For JavaScript, you have to assume the script may be malicious, so it always has to be fully checked anyway.
The point I was trying to make isn't that a binary format wouldn't have to be validated, but that the unpredictability of lazy validation wouldn't harm developer UX. It's not a problem if malicious people get bad UX :)
Anyway, I think you're underestimating the complexity of identifying block delimiters while tolerating comments, string literals, regex literals, etc. I'm not sure it's all that much easier than doing a full parse, especially given the need to differentiate between regex literals and division...
I was figuring you could just parse string escapes and match brackets to identify all the block scopes very cheaply.
Regex literals seem like the main tricky bit. You're right, you definitely need a real expression parser to distinguish between "a / b" and "/regex/". That still doesn't seem very expensive though (as long as you're not actually building an AST structure, just scanning through the tokens).
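For example (my own contrived snippet), the same characters mean different things depending on what came before, and a regex can legally contain what looks like an unmatched brace:

    let a = 4, b = 2, g = 3;
    a / b /g;            // two divisions: (a / b) / g
    a = /b /g;           // a regex literal (pattern "b ", flag g)
    // A naive brace matcher gets confused too: a '{' inside a regex
    // character class looks like an unmatched opening brace.
    const curly = /[{]/;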
Automatic semicolon insertion also looks fiddly, but I don't think it affects bracket nesting at all (unlike regexes where you could have an orphaned bracket inside the string).
Overall, digging into this, it definitely strikes me that JS's syntax is just as awkward and fiddly as its semantics. Not really surprising I guess!
Early error behavior is proposed to be deferred (i.e. made lazy), not skipped. Additionally, it is one of many things that require frontends to look at every character of the source.
I contend that the text format for JS is in no way easy to implement or extend, though I can only offer my personal experience as an engine hacker.
Indeed it's a semantic change. Are you saying you'd like that change to be proposed separately? That can't be done for the text format for the obvious compat reasons. It also has very little value on its own, as it is only one of many things that prevents actually skipping inner functions during parsing.
Our goal is not to complicate Javascript, but to improve parse times. Fundamentally that boils down to one issue: engines spend too much time chewing on every byte they load. The proposal then is to design a syntax that allows two things:
1. Allow the parser to skip looking at parts of code entirely.
2. Speed up parsing of the bits that DO need to be parsed and executed.
We want to turn "syntax parsing" into a no-op, and make "full parsing" faster than syntax parsing currently is - and our prototype has basically accomplished both on limited examples.
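As a toy illustration of point 1 (this is not the actual Binary AST encoding, just the shape of the idea): if every function body sits behind a length prefix, the loader can record where the body lives and jump over it without looking at a single byte of it, deferring all work until the function is first called.

    // Toy decoder: [varint length][function body bytes], repeated.
    // Not the real Binary AST encoding - just showing how a length prefix lets
    // the loader skip a body instead of scanning every byte of it.
    function indexFunctions(bytes) {
      const index = [];
      let pos = 0;
      while (pos < bytes.length) {
        // read an unsigned little-endian base-128 varint
        let len = 0, shift = 0;
        while (bytes[pos] & 0x80) { len |= (bytes[pos++] & 0x7f) << shift; shift += 7; }
        len |= bytes[pos++] << shift;
        index.push({ start: pos, length: len }); // remember where the body is...
        pos += len;                              // ...and skip it without decoding
      }
      return index; // decode index[i] lazily, on first call
    }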
> JS text format in general also means engines/interpreters/browsers are simpler to implement and therefore that JS code has better longevity.
As an implementor, I have to strongly disagree with this claim. The JS grammar is quite complex compared to an encoded pre-order tree traversal. It's littered with tons of productions and ambiguities. It's also impossible to do one-pass code generation with the current syntax.
An encoding of a pre-order tree traversal doesn't even need a general context-free parser (it can be handled by a deterministic PDA). It literally falls into a simpler class of parsing problems.
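A minimal sketch of what decoding such an encoding looks like (the tags and arities here are invented for illustration, not the proposal's actual node set):

    // Minimal pre-order tree decoder: read a tag, then decode that tag's fixed
    // number of children. No lookahead, no ambiguity, no cover grammars.
    const ARITY = { BinaryExpr: 2, Identifier: 0, NumberLiteral: 0 };

    function decode(stream, pos = { i: 0 }) {
      const tag = stream[pos.i++];
      const children = [];
      for (let k = 0; k < ARITY[tag]; k++) children.push(decode(stream, pos));
      return { tag, children };
    }

    // The expression "a + 1" as a flat pre-order stream (operand values elided):
    console.log(decode(['BinaryExpr', 'Identifier', 'NumberLiteral']));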
> The binary AST format proposes to skip this step [0] which is equivalent to wrapping function bodies with eval…
This really overstates the issue. One can equally rephrase that statement as: if you are shipping JS files without syntax errors, then the behaviour is exactly identical.
That serves to bring into focus the real user impact of this: developers who are shipping syntactically incorrect JavaScript to their users will have their pages fail slightly differently than they are failing currently.
Furthermore, the toolchain will simply prevent JS with syntax errors from being converted to BinaryJS, because the syntactic conversion is only specified for correct syntax - not incorrect syntax.
The only way you get a "syntax" error in BinaryJS is if your file gets corrupted after generation by the toolchain. But that failure scenario exists just the same for plaintext JS: a post-build corruption can silently change a variable name and raise a runtime exception.
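A trivial illustration of that plaintext failure mode, with an invented snippet:

    // One flipped character in shipped JS: no syntax error, no load-time warning,
    // just a ReferenceError the first time this code path is exercised.
    function total(items) {
      let count = 0;
      for (const item of items) count += item;
      return counl;   // was "count" before post-build corruption
    }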
So when you trace the failure paths, you realize that there's really no new failure surface area being introduced. BinaryJS can get corrupted in exactly the same way, with the same outcomes, as plaintext JS can get corrupted right now.
Nothing to worry about.
> We don’t need complicate JavaScript with such a powerful mechanism already tuned to perfectly complement it.
We need to speed up Javascript more, and parsing is one of the longest-standing problems; it's time to fix it so we can be fast at it.
Wasm is not going to make regular JS go away. Codebases in JS are also going to grow. As they grow, the parsing and load-time problem will become more severe. It's on us to address it for our users.
This architecture switch will have no impact on the V8 API. We're working closely with the Node.js team to make sure that this is a smooth transition and early results look as though there are significant performance benefits [0].
We're very interested in exploring this space. It's possible that this capability is exposed through an extension of sourcemaps [0]. Definitely curious to hear more ideas and feedback in this space.
It summarizes the discussion so far between various stakeholders. I've since moved on to other things, and I don't know where the effort currently stands.
The main benefit would be shipping a single WebAssembly module to target client and server (e.g. the same benefit of sharing JS between the browser and Node).
As pointed out below, technically "ECMAScript® 2015" is "ECMA-262 6th Edition", so the numbers still exist in some form. It's a really difficult balance between readability, matching most-common usage in the community (e.g. Kangax still uses ES6), and trying not to mix nomenclatures in the process.
The gutters are a little small on my iPhone, but I can see all the text. Want to send a screenshot to seththompson [at] google [dot] com? I'll see how much I can fix.