What is the point of "query privacy" when browsers send host addresses in plaintext (SNI) and destination IPs are still visible to the internet provider?
Layering DNS over TLS (or anything else) is meaningless: it increases RTT (and thus response time) without any benefit for most users.
It's mostly for protecting DNS response integrity, I'd say.
Using DNS over HTTPS or over TLS to hide traffic from your ISP is utterly meaningless. I don't know why people are advocating it for 'privacy' from your ISP.
For privacy, one would just use a VPN for all their traffic; then DNS over HTTPS matters much less, since DNS queries are routed over the VPN connection as well (assuming they go through the tunnel at all).
The only use I see: if you're visiting an HTTPS website that doesn't have HSTS (or you're visiting a website with HSTS for the first time), it prevents phishing of less tech-savvy people (a more savvy user would notice that the connection isn't TLS).
This use is further diminished if Firefox and other browsers start implementing the HSTS preloading[1] feature like Chrome, and people actually start submitting their domains for inclusion. I don't see that happening soon, so it retains some use.
Using O_APPEND does not imply that the kernel needs to purge the pages from cache ASAP, does it? Removing pages from the cache can be an expensive operation in itself, so I presume it is avoided by default.
More importantly, if the disk can not catch up, the log data is going to end up waiting in the page cache anyway (a typical case of bufferbloat). The Linux kernel does not have telepathic abilities to balance the needs of a crazy logger against other applications in the system, so without resolving the underlying issue (bufferbloat), those writes will take up too much cache, potentially dragging down disk performance for other applications.
fadvise() may schedule quicker eviction, effectively acting as a syscall-level version of vm.dirty_ratio. Of course, that does not resolve the problem; it just moves it to a different layer. The real solution is either
1) blocking the apps until their logs are fully written (for example, by using O_DIRECT)
2) showing those apps the middle finger and throwing away some of their logs (AFAIK, syslog occasionally does this).
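The eviction hint discussed above can be sketched in Python. This is a Linux/Unix-only sketch (`os.posix_fadvise` is advisory; the kernel is free to ignore it), and the function name is my own invention:

```python
import os

def append_and_drop(path: str, data: bytes) -> None:
    """Append data to a log file, force it to disk, then hint the
    kernel that the written range can be dropped from the page cache."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # dirty pages can't be evicted before writeback completes
        # Advisory only: the kernel may evict these now-clean pages early.
        os.posix_fadvise(fd, 0, len(data), os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
```

Note the fsync() before the hint: POSIX_FADV_DONTNEED can only drop clean pages, which is exactly why it doesn't help when the disk can't keep up.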
A VM certainly does "call the underlying host kernel operation", it just does so indirectly: the guest userspace calls fadvise(), the guest kernel's implementation of fadvise() asks the virtio disk driver to perform particular reads/writes, and the virtio driver asks the host kernel's disk driver to read/write individual disk sectors (without knowing that they are related to a specific file in the guest filesystem).
This specific bug was caused by putting high load on the "kernel dentry cache", i.e. contention for a memory structure present in kernel memory. Guests normally don't share memory, so contending for it is normally avoided.
Incidentally, there are situations when different guests can compete for the same memory: when the host uses so-called "memory deduplication" techniques. Which is why enabling that stuff on production systems may be a bad idea.
Oracle used the excuse that its current OpenJDK license (GPLv2) is incompatible with the license used by Google's runtime (Apache 2). If Google had re-licensed its Java implementation under the GPL, some of the arguments used by Oracle's lawyers (code reuse and patent (?) violations) would have been void, and arguing about reuse of APIs would have been a lot harder.
Of course, this does not really matter, because the whole lawsuit is just an excuse for power games between corporations. Oracle's goal wasn't Java licensing; it was gaining some degree of control over the emerging Android ecosystem.
No, the copyright trial is about the "structure, sequence, and organization" of the APIs, not any literal copying anymore. Switching languages while keeping the same class library still leaves them exposed.
You are making it sound like "injecting random garbage into HTTP" is some new hotness. It has been done since forever. Email, by the way, still works that way. But Google and a couple of other corporations would not like you to trample their email-harvesting business, so there is disproportionately less FUD and fear-mongering spread around email connections.
Internet providers have been injecting ads into websites for years. Hackers and governments have been doing the same to executables and other forms of unprotected payload.
Hashes, cryptographic signatures, executable signing, Content-Security-Policy, sub-resource integrity: numerous specifications have been created to address the integrity of the web. There is no indication that those specifications have failed (and in fact, they remain useful even after the widespread adoption of HTTPS).
For the most part, the integrity of modern web communication is already protected even in the absence of SSL. The only missing piece is somehow verifying the integrity of the initial HTML page.
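To illustrate one of the mechanisms above: sub-resource integrity works by embedding a hash of each script or stylesheet in the page, and the browser refuses to execute a mismatching file. A minimal sketch of computing an SRI value (the `sha384-` prefix plus a base64 digest, per the W3C SRI format; the sample script content is made up):

```python
import base64
import hashlib

def sri_hash(data: bytes) -> str:
    """Compute a Subresource Integrity value for a script/stylesheet."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode()

script = b"console.log('hello');"
integrity = sri_hash(script)
# Referenced in HTML as:
#   <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
print(integrity)
```

As long as the page carrying the `integrity` attribute is itself trustworthy, the scripts it pulls in are tamper-evident even over plain HTTP, which is exactly the "missing initial page" caveat.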
I doubt that any government is inherently more trustworthy than any other.
It just coincidentally happens that the US controls 100% of root CAs and Kazakhstan (most likely) controls 0%. So the latter needs more audacious measures, while the former can just issue a gag order to Symantec (or whoever is currently active in the market).
The CA system is inherently vulnerable to government intervention. There is no point in including defense against state actors in the HTTPS threat model. It is busted by default.
I would love it if null-encryption ciphers actually worked in real life, but they don't (for the same reason plaintext HTTP/2 does not: everyone disabled them under political pressure).
By the way, signing is not the same as "null encryption". Signing can be done in advance, once. Signed data can be served via sendfile(). It does not incur CPU overhead on each request. Signing does not require communicating with untrusted parties through vulnerable SSL libraries (which can compromise your entire server).
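The "sign once, serve forever" point can be sketched with the Python standard library. Note the hedges: HMAC here is only a stand-in for a real public-key signature scheme (stdlib has no asymmetric signing), and the key and content are invented for illustration:

```python
import hashlib
import hmac

# Stand-in for a deploy-time private key; real web signing would use
# an asymmetric scheme so clients need no secret.
KEY = b"deploy-time signing key"

def sign_once(content: bytes) -> bytes:
    """Run once at deploy time, never per request."""
    return hmac.new(KEY, content, hashlib.sha256).digest()

page = b"<html>static content</html>"
sig = sign_once(page)  # the (page, sig) pair can then be served as static files

# A verifier recomputes the tag and compares in constant time.
assert hmac.compare_digest(sig, hmac.new(KEY, page, hashlib.sha256).digest())
```

The contrast with TLS is that the expensive operation happens once per deploy, not once per connection; the static bytes can then go out via sendfile() with zero per-request crypto.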
As we speak, your SSL connection may be tampered with. Someone may be exploiting a Heartbleed-like vulnerability in the server or your browser (or both). You won't know about it, because you aren't personally auditing the binary data that goes over the wire… Humorously enough, one needs to actively MITM and record connections to audit them. Plaintext data is easier to audit and reason about.
And how do you sign these requests? How do you get browsers to trust the signature? Oh, well, we already have a similar solution that also protects the entire connection from spying... it's called HTTPS.
So a bunch of centrally controlled monopolies agreed to realign their offerings to maximize profit and gain greater control over end users.
They also pretend that compromising a 3-month certificate is "ok" (or at least less harmful than compromising a year-long certificate), when in practice there is no reason to assume so: 3 months is more than enough for any real-life eavesdropper.
Firstly, CA/B explicitly can't talk about pricing or product offerings, because a group of businesses that collaborate on setting prices or product offerings is called a Cartel and is illegal (the example you're probably thinking of, OPEC, exists because its members are sovereign entities, and thus enjoy total immunity from the law). When they meet in person the CA/B members always begin by reading out the rules that lay out what mustn't be discussed for this reason.
Secondly, the idea is not at all that compromising 3-month certs is "ok". Instead Ryan's focus is on the pace of change. During 2016 CAs agreed to use the Ten Blessed Methods for validation, in 2017 that agreement became a concrete rule (thanks to Mozilla) but a 39 month certificate issued under the prior validation status quo would still be trusted until mid-2020.
Historically what has happened is that there's a grace period, and then CAs are supposed to go back and revoke any certificates still outstanding that break the new rules. But this is error-prone: back in early 2017 you can see the list of violations I found while checking that certificates for now-prohibited "internal" names were revoked as required. Each CA had excuses for why they'd missed some, but the overall lesson is that things will be missed. So Ryan doesn't want to rely on grace periods; he wants a shorter window of validity for the certificates.
MD5 and SHA-1 are the go-to examples for this stuff. We already expect that SHA-2 (e.g. SHA-256, currently used in certificates) will fall the same way as the others, because it's the same construction, so we're going to be doing this again in perhaps 5-10 years. But with 39-month certificates, the _minimum_ time from changing the rules to getting rid of the problem is 39 months; if it takes a few months to agree on what to do, the total may be closer to 4 years. That's a very long time in cryptographic research, too long to predict what's coming. 90 days would be much better from this perspective.