If they had renters, they could pass those savings on. The article discusses how landlords are keeping units empty because it has become too risky/expensive to rent them.
When the law has become so restrictive that people would rather not engage in commerce at all, then it is broken. This isn't helping anybody.
(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google.)
Performance. gRPC is basically the most recent version of Stubby, and at the scale we use Stubby, it achieves shockingly good RPC performance: call latency is orders of magnitude better than any form of HTTP-based RPC. This transforms the way you build applications, because you stop caring about the cost of RPCs and start wanting to split your application into pieces separated by RPC boundaries, so that you can run lots of copies of each piece.
I cannot sufficiently explain how critical this is to the way we build applications that scale.
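The kind of RPC boundary described above can be sketched as a gRPC service definition. This is only an illustration; the service and message names are invented, not anything Google-specific:

```proto
// Hypothetical service definition: each piece of the application exposes a
// small, narrow RPC surface, so many copies of it can run behind a balancer.
syntax = "proto3";

package example;

service Thumbnailer {
  // One narrow responsibility per service keeps pieces independently scalable.
  rpc CreateThumbnail (CreateThumbnailRequest) returns (CreateThumbnailResponse);
}

message CreateThumbnailRequest {
  bytes image_data = 1;
  uint32 max_width_px = 2;
}

message CreateThumbnailResponse {
  bytes thumbnail_data = 1;
}
```

Once a piece of the application sits behind a definition like this, scaling it means running more replicas of just that piece.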
I'm a former Google engineer working at another company now, and we use HTTP/JSON RPC here. That RPC layer is the single highest consumer of CPU in our clusters, and our scale isn't all that large. I'm moving over to gRPC ASAP, for performance reasons.
I'll just point out that in the UK and most of Europe we don't see a reason why this should be reciprocal. You can quit at any time, but you also can't be fired without a reason if you've worked at a company for more than 1-2 years.
It doesn't stop people from firing bad coworkers, and it appears to have no negative effects on employment.
> This is mostly to provide documentation in the event of a wrongful termination suit.
While it might serve that purpose, it's primarily to make sure that middle management takes reasonable steps to let people correct course, and thinks the decision through before acting. Nobody wants to work in a place where people get unexpectedly or arbitrarily fired, or where a manager is firing all the people they don't like. Having processes (mostly) prevents that sort of thing from happening.
(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google. My team is oncall for this service and I know exactly what happened here; I probably can't answer most questions you might have.)
Let's go with "yes", as the most accurate answer. As soon as I or whoever is oncall has figured out what change was responsible, we can usually revert it quickly and easily. Usually, if I'm oncall and I have reason to even suspect a recent change might be the cause, I'll revert it and see if the problem goes away.
The difficulty becomes more apparent when you realise the sheer number of infrastructure changes being made every hour, some of which will be fixes to other outages, and some of which will be things you can't revert because they are of the form "that location has fallen offline; probably lost networking" or "we are now at peak time and there are more users online". So if your question is "can we just roll the whole world back one day" - no, too much has changed in that time.
(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google. My team is oncall for this service and I know exactly what happened here; I probably can't answer most questions you might have.)
> Perhaps your architecture wouldn't "compile" if the network traffic will go the wrong place, or if a rate limit is above the capacity something is expected to handle, or if the change would impact too many servers at once.
So in the first instance, I tend to like this sort of idea. However: we are already substantially ahead of the sort of things that you're thinking of.
Full static simulation of a system as complicated as all the components involved here is... well, I can sort of see how it could be done, but it would be a herculean effort; I don't think it would ever be good enough to catch cases like this the first time they happen. There are systems where this sort of thing can be done, but all the ones I can think of are much smaller in scope.
To fully answer questions like "how much traffic will go in this direction?" you need your analysis to include a simulation of what the entire internet is doing. That's hard.
I can't talk about the details, but you can assume that "static analysis" of the form being talked about here is something we've already done, and it's not enough to handle cases this complicated.
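The simple end of what's being proposed, rejecting a config whose rate limit exceeds the capacity behind it, might look something like the sketch below. All names and the config structure are invented for illustration; real pre-submit checks operate on far richer data:

```python
# Hypothetical pre-submit config check: refuse to "compile" a change whose
# rate limit exceeds the declared capacity of the service behind it.

def validate_config(config: dict) -> list[str]:
    """Return human-readable errors; an empty list means the config passes."""
    errors = []
    for name, svc in config.get("services", {}).items():
        rate_limit = svc.get("rate_limit_qps", 0)
        capacity = svc.get("capacity_qps", 0)
        if rate_limit > capacity:
            errors.append(
                f"{name}: rate limit {rate_limit} qps exceeds capacity {capacity} qps"
            )
    return errors

config = {
    "services": {
        "frontend": {"rate_limit_qps": 1000, "capacity_qps": 1500},
        "backend": {"rate_limit_qps": 5000, "capacity_qps": 2000},
    }
}

for err in validate_config(config):
    print(err)
```

Checks like this catch the easy, locally-decidable mistakes; the point above is that the hard failures involve dynamic, global behaviour ("how much traffic will actually arrive here?") that no per-change static check can see.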
> The value is immeasurable, it's just that finance can't get a piece.
Plenty of people make money from this: your ISP, the utility company that pulls cables under the road, the construction company that dug up the road to put them in, the owner of the site where your ISP's network equipment is located, and many of the websites that you interact with.
I believe that the source of the confusion here comes from trying to slice the world into "VC monetization" and "human value", as if these were different or opposed things. I have a different way to look at this space which reveals useful insights:
We see lots of technologies go past that people seem to be excited about, but which then fail in the market. It is currently popular to imply that this means "the market" is some alien thing which is not aligned with what people want. A more realistic view is that people have multiple levels of interest. We can order some of them from lowest to highest:
1. willing to read an article
2. willing to write a comment on the article
3. willing to blog about the technology
4. willing to open their wallet
5. willing to pay the full cost of making it
What we see is that a lot of technologies can only reach levels 1 through 3: people are interested, but not interested enough to cover the cost of making it. By any reasonable standard, that means we shouldn't make the thing: its value to people is less than the value of the raw materials that went into it. "Failed in the market" is a way of summarising this decision, but it gets a lot of negative press because it hides the details, so people don't understand the value comparison being made here.
The neatest mnemonic to think about this is "money is the unit of caring: you can measure how much people care about a thing happening by measuring how much money they are willing to spend on it".
"VC monetization" fits neatly into this picture: VC want to know more or less immediately if people are going to reach interest level 4 or 5 on this scale. They do not want to burn time and money on things which can only reach level 3: those things never had a future. You cannot tell the difference without asking people to open their wallets.
I do not believe this statement to be correct: it seems entirely possible for this to be done via reckless incompetence, rather than criminal fraud. All it takes is for somebody to calculate the risk incorrectly and everybody else to fail to check their calculations.
It may involve criminal fraud, but it is also possible that it does not.
(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google. I don't know what's going on in this particular case, and I don't really want to, because it's probably a huge pile of lawyers.)
> It is 'reasonably possible' for Google to give this person access to his data.
This is not necessarily true. For example, the data might contain material which is illegal to distribute. I'm not sure what can be done in a case like that.
Sure, but unless the guy has a massive stockpile of CP that he was storing on Google's servers, how difficult would it be to let him access his data sans the illicit material? It's not as though a handful of illegal pictures somehow taints every email, blog post, and story he's ever uploaded so as to make them all contraband.
> how difficult would it be to let him access his data sans the illicit material?
It's not my service, and I'm talking about the general case rather than this specific one, but that seems to me like it would be pretty complicated - you'd need some sort of review procedure to determine what material can and cannot be distributed, you'd need to somehow do this while preserving user privacy, and you'd need some engineering work to make all this possible.
A project of that scope could take weeks or months to complete, depending on the amount of data involved.
You seem to be talking about some specific incident, and guessing about what happened. We're discussing the general case of what you do when you've got a large collection of data that isn't yours, and all you know is that some part of it can't be distributed.
>You seem to be talking about some specific incident, ...
I thought it was obvious from context that I was referring to the specific case in the article.
Anyway, I don't think the general case you're describing makes sense or is very realistic. It's impossible to know that some part of the data collection can't be distributed without somebody or something looking at it. Either the person reporting it or the algorithm flagging it should be able to identify the specific data items that can't be distributed.
> This is not necessarily true. For example, the data might contain material which is illegal to distribute. I'm not sure what can be done in a case like that.
In this case the best decision is probably to stop distributing them to the world but still enable the original owner/uploader to download them.