Postgres is really good at a lot of things, but it's unfortunate that it's so bad at simple analytics. I wish there were a plugin instead of having to run N databases.
(ParadeDB maintainer here) Yes! We've already built some faster analytics in Postgres, and have a lot more coming. Here's some relevant context in case you're curious: https://www.paradedb.com/blog/faceting
I never understood this point in comparisons of AWS and GCP. Why do people need direct support that much?
In 8 years, I had to reach out to GCP maybe twice and still got an answer anyway.
We've only raised a handful of support cases with GCP in the past five years, but we happened to raise one this week and they pointed us to a preview feature that solves the problem we were facing. I'm suddenly wondering if we should be trying our luck with support more often instead of figuring things out ourselves.
I found two separate bugs in GCP products. One with gVisor where it would sometimes null-truncate large network packets (this was very hard to diagnose – why is my JSON full of null bytes?) and one where Cloud Run broke sudo sporadically (sudo in a FaaS is definitely niche, I had essentially containerized a very old application written by undergraduates).
Both times they were serious production bugs that took at least a week to resolve, though I only had the lowest tier of support package.
In my experience, it's the edge cases. The few times I had to reach out to AWS support were due to some weird edge case we couldn't fix but AWS had to. And having a rep involved made it so much smoother.
If you need to bump a quota above the predetermined range of what Googlers think is "normal" usage (which is far too low to run anything at scale), you have to talk to a human to negotiate the quota bump. Why? Because Googlers, in their infinite engineering wisdom, use GCP quotas not as a cost-optimization guardrail for the customer's benefit, but to inform Google of when and how much metal they need to buy for the datacenter region you are running in.
I have to defend the Googlers here (I work at a different hyperscaler). Teams and services need to optimize their COGS, which means optimizing infrastructure cost. A lot of pay-as-you-go services may have no base cost to customers, but they still require some infrastructure to be provisioned. Without quotas, you can end up with a lot of provisioned infrastructure that doesn't produce enough revenue to even collectively break even. Just yesterday this was a decision we evaluated again on my team. As a team we cannot afford unlimited quotas, both because of what that would do to our bottom line and because we can't necessarily obtain all the quotas we need ourselves to provision enough capacity for our dependencies. It's a difficult trade-off requiring manual intervention.
I may not have emphasized enough how important quotas are for customers. Quotas are very important guardrails for orgs: they ensure that the newly hired engineer who wants to "test drive the cloud" by running a BigQuery tutorial they found on GitHub gets stopped before burning $10k in an afternoon. However, quotas on GCP are there for the benefit of Google, not geared toward the customers. First, there is an ever-expanding tree of potential quotas complicating production rollouts of infra, and second, they are all set insanely low, so even the smallest POC gets blocked. Requesting a small increase routes the quota through software and auto-approval; requesting a quota that allows for a production workload? Three weeks plus help from your account rep, if Google has blessed you with the privilege of being allowed to talk with a human Googler. No account rep, you say? Well, your production workload can just wait around for Google support to potentially acknowledge your existence.
One day you will need support, and when you do, you will realise why every week there's a top-voted post on HN from someone complaining about not being able to reach Google support.
Learning git internals was definitely the moment it became clear to me how efficient and smart git is.
And this way of versioning can be reused in other fields: as soon as you have some kind of graph of data that can be modified independently but read all together, it makes sense.
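The core trick is content addressing: every object is stored under the hash of its contents, so a "version" is just a graph of hashes. A minimal sketch in Python (the `put`/`store` names and the text-based tree are my own simplification; real git trees use a binary format):

```python
import hashlib

def hash_object(data: bytes, obj_type: str = "blob") -> str:
    # Git hashes "<type> <size>\0<content>", not the raw content.
    header = f"{obj_type} {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

store = {}  # content-addressed store: hash -> bytes

def put(data: bytes) -> str:
    h = hash_object(data)
    store[h] = data  # identical content dedupes automatically
    return h

# Two "files" modified independently...
a = put(b"hello\n")
b = put(b"world\n")
# ...read all together via a tree that just lists its children's hashes.
tree = put(f"a={a}\nb={b}\n".encode())

# Matches `echo "hello" | git hash-object --stdin`
print(a)  # ce013625030ba8dba906f756967f9e9ca394464a
```

Because the tree's hash depends on its children's hashes, changing any file changes every ancestor's hash, which is what makes unchanged subgraphs shareable between versions.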
I'm not sure I get it. They say it took decades for this particular pollution to reach the Philippines via ocean circulation systems, but the images suggest it's coming from rivers. Is it possible that China is hiding something?
You'd see other isotopes if it was something recent. It's hard to hide things like this, see the Ruthenium plume that was detected over Europe in 2017 [0]. Radiation instruments are very sensitive!
By publishing recent studies showing a smoking gun?
UP MSI said the results were consistent with recent Chinese studies linking iodine-129 in the Yellow Sea to decades-old nuclear weapons tests and nuclear fuel reprocessing facilities in Europe, ...
China has a great many reactors, nuclear warheads, and acres of low-level waste from rare earth processing; they have as much to hide as the US does. That said, there's not a lot to gain here by pretending they don't have potential sources. Isotope fingerprints can be verified, and the byproducts from > 2,000 test explosions and the creation of 10,000+ warheads globally do get mapped and tracked.
Most people say their number one complaint is limited history. But then you offer that, and they realize it was not such a big deal. Slack still wins on so many levels that I don't see anyone willing to move any time soon.
There is something in the air regarding dark mode: lots of people are starting to admit dark-mode UIs are often harder to read, harder to build, or not worth the effort. Maybe we are past peak dark mode.
A well-designed dark mode UI is just as readable as a well-designed light mode UI. The issue is that a lot of designers design for light mode and then just try to invert it for dark mode rather than actually designing for dark mode. I'd imagine your post would exist for light mode if we had started with dark mode as the default.
A lot of software is dark-mode first, but it's still not right. Good dark schemes are just really hard to design; there are too many nuanced differences. Color perception is maybe 10% of it. Typography, line thickness, optical balance, accounting for massively increased contrast, antialiasing, layout, picture rendering: absolutely everything should be done differently on dark backgrounds.
And it depends too much on your environment, the type of display, and its pixel density, unlike light mode, which is way more forgiving of external factors.
I think a factor is that when a significant design innovation appears, it has to be reasonably usable to get traction, but changes to an existing paradigm just have to be distinctive. Hence light mode gets lighter and lighter until misery/unusability, dark mode then gets something distinctive until it's also unusable, and people go searching for a third way.
Kind of a particular instance of enshittification.