Yrlec's comments

Just hold the bond to maturity.


$100 in 10 years is simply not worth $100 today in the current interest rate environment.


But that’s not an option if you have to pay out now.


That's a liquidity crisis.


Let’s say I owe $100.

If I have assets worth $110 today but they are locked up, and I have to pay back what I owe today, then I’m facing a liquidity crisis.

If I have assets worth $90 today, but $110 in a few years, and I have to pay back the money I owe in a few years, then I’m solvent and everything is good.

If I have assets worth $90 today, but $110 in a few years, and I have to pay back what I owe today, then I’m insolvent.
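The same three cases as a toy sketch (the $100 debt and the "locked" flag are just the assumptions from the example above):

    # "locked" means the assets can't be sold or accessed today.
    def classify(assets_today, assets_at_maturity, debt_due_now, locked, debt=100):
        if debt_due_now:
            if assets_today >= debt and locked:
                return "liquidity crisis"  # enough value, just not accessible in time
            return "solvent" if assets_today >= debt else "insolvent"
        return "solvent" if assets_at_maturity >= debt else "insolvent"

    print(classify(110, 110, debt_due_now=True, locked=True))   # liquidity crisis
    print(classify(90, 110, debt_due_now=False, locked=True))   # solvent
    print(classify(90, 110, debt_due_now=True, locked=True))    # insolvent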


It's insolvency if the current market value (not the hold-to-maturity value) of the assets is less than the liability. As far as I can tell, though, SVB was solvent despite its losses, and just needed to raise money to cover reserve requirements after it realized the losses. What did it in was a lack of liquidity after everyone panicked and ran on the bank, with $45 billion (out of ~$175 billion in deposits) in withdrawals overnight.


There are no reserve requirements.


It started as a liquidity crisis (so they had to start selling or raise capital), and it turned into a solvency crisis (they had to sell even more, which meant marking assets as available for sale and therefore marking them to market).

Right now SVB, if fully liquidated, cannot repay all of the deposits.


People aren't waiting ten years for their money.


I’m inclined to say they will if it guarantees they get it back.


But a dollar in 10 years is worth less than a dollar today. In either case the result is the same: you take a haircut on the present value of your deposits.


No, but $1 in ten years will be worth more than $0.50 today.


Specifically, it's worth about $0.65 today.
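That's just a discounted-cash-flow calculation. A quick sketch, assuming a ~4.4% annual discount rate (a stand-in for the prevailing 10-year yield; plug in your own):

    # Present value of $1 received 10 years from now.
    rate = 0.044                      # assumed annual discount rate
    pv = 1 / (1 + rate) ** 10
    print(f"${pv:.2f}")               # -> $0.65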


Now is a good time to point out that the SLA for Google Cloud Storage only covers HTTP 500 errors: https://cloud.google.com/storage/sla. So if the servers are not responding at all, it's not covered by the SLA. I've brought this to their attention and they basically responded that their network is never down.


Ironically I can't read that page because, since it's Google-hosted, I'm getting an HTTP 500 error... which at least means that service is SLA-covered...

Cloud services live and die by their reputation, so I'd be shocked if Google ever tried to get out of following an SLA contract based on a technicality like that. It would be business suicide, so it doesn't seem like something to be too worried about?


Doesn't surprise me. The value of their ad products is deteriorating rapidly. For instance, just today we found that one of our UAC campaigns, which supposedly is super smart and will automatically find the best channel for you, decided to switch to a new channel with a 14x higher CPC than the channel that was working very well. That new CPC was 2x our target CPI. Why on earth they don't put in limits to prevent stupid bids like that is beyond me.


(Going back over a decade) I guessed Google was under-monetizing and could just keep slowly turning the knob, and that the indicator would be when they couldn't turn that monetization knob any more.

Google's ad platform is at this point now. It has turned into a jumble of UI anti-patterns and dark patterns. Each successive rollout of a new feature is basically "add more text we can test" and "relinquish your bidding control to us". At a slow, incremental pace this works. Eventually, however, they max out either what the advertiser can budget, or they remove enough of the advertiser's ROI that it doesn't work any more. Then the risk becomes not failing to grow revenue, but actually collapsing it.

One of the things that stuck with me the most was that, many years ago, Yahoo would go in and change the text of your search ads. That was egregious on many levels. The non-self-service display ad networks would make you jump through a bunch of hoops to run a campaign and then make it as hard as possible to stop the campaign without it being fraud. AdWords, on the other hand, had none of these problems. It worked fantastically. Not anymore.


Agreed. Even on boring old paid search, the new search suggestions are driving me crazy and I can't see a way to easily turn them off.

It's changed some very niche campaigns from CPC bidding to CPA bidding, and performance dropped by about two thirds until I noticed and switched back. But it's trying to auto-apply CPA bidding again. Losing the will to live fighting the endless BS Google throws at ads now.


Isn't it great that exact match doesn't mean exact match? Or isn't it great that your ads and landing pages routinely outperform the supposed Quality Score? Isn't it great that...


Happy to hear that, I've been waiting for it! Makes it much easier to make sure I have it on all my devices. Love Leonardo!


You can quantize it to get the parameters down to 8 bits.
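For illustration, a minimal sketch of per-tensor symmetric int8 quantization (real schemes add per-channel scales, calibration, etc.):

    import numpy as np

    # Map float weights onto the int8 range with one shared scale per tensor.
    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0   # largest magnitude maps to 127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale   # approximate reconstruction

    w = np.random.randn(256, 256).astype(np.float32)
    q, s = quantize_int8(w)
    print(np.abs(w - dequantize(q, s)).max())  # per-weight error bounded by ~scale/2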


Actually even 1 bit might be enough.


Given the big mess they had when Google Drive migrated to Google Play subscriptions, I wouldn't exactly say that they're good at it. http://imgur.com/a/gzkZ5


I'd be very surprised if Dropbox manages to IPO at a $10B valuation. The growth simply isn't there. They're big on platforms and in markets with no growth, but they're getting killed in growth markets. E.g. my cloud storage startup Degoo recently surpassed Dropbox on the Android grossing rank in India (http://imgur.com/HsII1KK) and we're only 5 people.


Their revenue according to the article is $750 million a year. You really think that their P/E ratio is less than 13? Most tech companies trade at a P/E between 25 and 30 these days. I wouldn't be surprised if DropBox's valuation was around $20B.


As the other response said, you're confusing revenue with earnings.

Very few companies have a revenue multiple of 13x. Looking at this chart https://blog.intercom.com/wp-content/uploads/2014/07/4-forwa... it looks like Dropbox can expect something in the 6x-10x range, valuing them at $4.5-7.5B.
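The arithmetic behind that range, spelled out:

    revenue = 750e6                        # annual revenue from the article
    for multiple in (6, 10):               # forward revenue multiples from the chart
        print(f"{multiple}x -> ${revenue * multiple / 1e9:.1f}B")
    # 6x -> $4.5B
    # 10x -> $7.5B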


Seems that you are correct - "earnings" refers to profit (bottom line) rather than revenue (top line). Thanks!

That being the case, Dropbox probably isn't profitable yet, which would throw their P/E through the roof and make it a useless metric.

Box is already public, with revenue around $300 million and a valuation of ~$1.9B. Dropbox will probably get a better revenue multiple, since its customer acquisition cost is lower than Box's and its overall network effects and future outlook are better. I'd place Dropbox at a ~$6B valuation. Hopefully their revenue grows enough between now and the IPO to support something closer to $10B or $12B.


I think you are confusing revenues with earnings...


1% of roughly 2 billion users = 20 million users. With one request every 5 years, that'd be roughly 10k support requests per day. Considering that a lot of the cases can be answered very quickly (or possibly automatically), you could easily handle that with a couple hundred employees. The cost of that would be roughly 1/1000 of Google's annual income.
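Spelling out the arithmetic (the cases-per-agent figure is just an assumption for illustration):

    users = 2e9 * 0.01                      # 1% of ~2 billion users = 20 million
    per_day = users / (5 * 365)             # one request per user every 5 years
    print(round(per_day))                   # ~11k requests/day, i.e. roughly 10k

    cases_per_agent_per_day = 50            # assumed; many cases are quick or automated
    print(round(per_day / cases_per_agent_per_day))  # ~219, a couple hundred employees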


> The cost of that would be roughly 1/1000 of Google's annual income.

I don't own Google stock, but 0.1% of revenue would be a huge number that you just can't blink away, especially when it is not a one-off whim, I mean spectacular moonshot, but rather an ongoing business obligation.

I hate that the service sucks but I'd rather they don't waste money like this.


Also on GCS: if you do a HEAD after a DELETE on a bucket that is under lifecycle management, it returns 200 instead of 404. Not really a consistency issue, but it can really come back and bite you if you're not aware of it. GET returns 404 but HEAD returns 200.

I reported it as a bug but Google said it was by design. More specifically they said: "You are correct, if the versioning enabled in your bucket then the object metadata is saved as an archive object in the bucket [1].This is the reason you are getting 200 for your HEAD request."
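A minimal way to reproduce it with plain HTTP (bucket and object names are placeholders, and real requests need a valid OAuth token):

    import requests

    # Placeholder bucket/object; the bucket has lifecycle management enabled.
    url = "https://storage.googleapis.com/my-bucket/my-object"
    headers = {"Authorization": "Bearer <token>"}

    requests.delete(url, headers=headers)                   # delete the live object
    print(requests.get(url, headers=headers).status_code)   # 404
    print(requests.head(url, headers=headers).status_code)  # 200 (archived version)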


If you're considering using Google Cloud Platform, it's important to know that their SLA is practically useless. It only counts requests with HTTP status code 500. If the system is not responding at all, that's not covered by the SLA. See their definition of "Error rate": https://cloud.google.com/storage/sla

This is not just a theoretical issue. In the past week we've been doing a bit more than 5 requests/sec to Google Cloud Storage, and according to NewRelic the average response time was 8 seconds! I.e. the service has been down, not responding at all, for large periods of time. I've been in contact with their support team and they've refused to reimburse us anything.
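A toy illustration of why that error-rate definition matters: timeouts never produce a status code, so a 500-only error rate can look perfect during an outage:

    # Hypothetical responses; None means the connection timed out entirely.
    statuses = [200, 200, None, None, None, 500, 200, None]

    sla_error_rate = statuses.count(500) / len(statuses)
    real_failure_rate = sum(1 for s in statuses if s != 200) / len(statuses)
    print(sla_error_rate)      # 0.125 -- all the SLA counts
    print(real_failure_rate)   # 0.625 -- what users actually experienced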


Are you by any chance using Nearline or Coldline storage? These offer slower average access times in exchange for a steep pricing discount: https://cloud.google.com/storage/docs/storage-classes

If not, drop me a line at jani at google dot com with a reference to your support case and I'll be happy to take a second look. (Yes, I work in Google Cloud Support.)

That said, we have very recently (as in, late October [1]) introduced a new pricing model for GCS with the explicit goal of reducing latency, and the SLA may be due for an update accordingly. I'll look into this.

[1] https://cloudplatform.googleblog.com/2016/10/introducing-Col...

Also, the HTTP 500 thing is specific to GCS only, other services like GCE [2] define downtime more broadly as "loss of external connectivity or persistent disk access".

[2] https://cloud.google.com/compute/sla


We use Multi-Regional, Regional, DRA, Nearline and Coldline. NewRelic doesn't differentiate between the buckets; it only provides an average across all requests to storage.googleapis.com. However, since you're now promising sub-second access times for all storage classes, that still wouldn't explain it.

Don't you agree that it's odd to only include HTTP 500 errors in the error rate? Let's say someone hacks your DNS servers and points storage.googleapis.com to 127.0.0.1. The entire service would be completely down, but according to your SLA you'd have 100% uptime.


I asked Google's support team the same thing regarding the SLA not covering situations where the system isn't responding to any requests at all. This was their response: "please understand that these SLA's are meant to cover backend issues on our end. In your scenario, we would have no control over our DNS server getting hacked. I apologize if there was confusion caused."

So Google claims that it does not have control over its own DNS servers and is therefore not to blame if the DNS is pointing to the wrong IP. Not very reassuring.


If your product's users have unreliable connections, a GCS connection timeout might be a failure of their connections rather than GCS itself.

If a mobile app can't connect to GCS it could be that GCS is down - but more likely the user just has a weak signal.


These numbers are from AWS EC2 instances. Not from mobile app users.


It sounds like your problem is non-trivial, and is currently being diagnosed by support. Hopefully they get to the bottom of it and find the root cause very soon.

Unfortunately these things can occur in the darndest of places: a bug in Google Cloud, an incident at GCS, or maybe even your monitoring stack. I would encourage you to hold off judgement until the root cause is identified.

One assurance I can make is that Google SRE monitors these things very carefully 24/7, and such levels of latency in the service would be treated as an incident. So it's likely something else is going on.

(work at Google Cloud, but not on GCS or support)


Thanks for the follow up. This has been an open support case since October 15th. Regardless of the reason of the issue I find it absurd that only HTTP 500 errors are covered by the SLA.


IMO, in many cloud services the SLA is only useful as a vague indicator of what the system was designed for. The reason I'm saying this is that quite often you just get a small amount of service credits if the SLA is not met.

For many SaaS businesses the service credits are quite useless, because you provide so much value on top of the cloud services you purchase. You pay $1 for cloud and charge your customer $50 for your app. If the cloud is down, you get $0.10 in credits but need to credit $5 to your own customer (in a good case).

(I'm not blaming the cloud providers for this. If they offered better terms, they would need to transfer the risk to their customers and significantly raise prices anyway, or take the risk of going bankrupt in case of major problems.)


I'd be interested to hear more details; this doesn't square with personal experience, unless it was one of the delayed-access storage classes someone else mentioned.


Here are the GCS response times according to NewRelic: http://imgur.com/l1dj1Mx


We are also calling S3 from our servers. S3 is receiving more requests and has had 0 issues. This is the corresponding NewRelic data for S3: http://imgur.com/HjH4f0Q


Is that 8 seconds to first byte, or 8 seconds for the complete body?

No SLA I've seen guarantees a time for full body because that time fluctuates too much with both the size of the object and the current state of the internet. The new-ish refresh of the GCS lineup of services says you get sub-second access, but that has to be time to first byte, and I have a hunch that NewRelic shows you time to last byte.

If my assumptions are accurate, I would say the data you get from NewRelic does not warrant reimbursement from Google, though I might side with you if all of your objects are tiny.
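If it helps, here's one way to measure the two separately (the URL is a placeholder):

    import time
    import requests

    url = "https://storage.googleapis.com/some-bucket/some-object"  # placeholder

    t0 = time.monotonic()
    r = requests.get(url, stream=True)
    next(r.iter_content(chunk_size=1))           # blocks until the first byte arrives
    ttfb = time.monotonic() - t0
    for _ in r.iter_content(chunk_size=65536):   # drain the rest of the body
        pass
    ttlb = time.monotonic() - t0
    print(f"TTFB {ttfb:.3f}s, TTLB {ttlb:.3f}s")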


I used to work at New Relic and I think that is time to last byte. However, from that view it would be "time to send the last byte to his app", not "time to send the last byte to his end users". The missing piece of the equation from his original post is the average size of the payload, which would enable us to do more than speculate...


This also occurs for small payloads (e.g. list operations). Google's support team has acknowledged that the problem was on their end. They sent us this message: "I wanted to let you know we have some more information regarding the root cause of the issue you faced. Further investigation with our engineering team confirmed that the issue was caused by a provisioning error in the internal Cloud Storage infrastructure that led to low performance and “Service Unavailable” errors when handling uploads to the US region."

Unfortunately the problem is still occurring after I got this message (although less frequently).

