We were building a payments system in the early 2000s and got a diktat not to use Oracle. The amount of work we had to do to satisfy the availability and durability requirements was so huge that it consumed the first few years of the project. We didn't get to the business side of things until much later. The funny thing is we ended up giving up on MySQL and going back to Oracle after all that work. The whole thing was scrapped after a couple of years.
To get to the level of scale that Oracle can handle we had to build sharding and cluster replication from scratch. Even then it didn't reach a tenth of the throughput of a single Oracle node. Obviously we made a lot of poor architecture decisions as well - in hindsight, of course.
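For anyone who hasn't had to hand-roll this: "building sharding from scratch" usually starts with something like deterministic hash-based routing of a key to a database shard. This is a minimal illustrative sketch (not our actual system - the shard names and key format are made up):

```python
import hashlib

# Hypothetical shard list; in a real deployment these would be
# connection strings for separate MySQL instances.
SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(account_id: str) -> str:
    """Map an account id to a shard deterministically, so every
    application server routes the same key to the same database."""
    digest = hashlib.md5(account_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

The hard part isn't this function - it's everything around it: resharding when you add nodes, cross-shard queries, and keeping replicas in sync, which is where the years went.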
Yes, although a lot of the most advanced PostgreSQL features that would bear comparison in this discussion are relatively recent. PostgreSQL didn't have them in the 2000s, either, and where it did, the ergonomics were much worse than they are today.
I use Patroni (https://github.com/patroni/patroni) (no affiliation) which is a really nice and reliable PostgreSQL HA solution that provides automatic failover, not just active-standby nodes with manual failover.
As I understand it, with plain PostgreSQL you would have to script a separate watchdog process yourself to get high availability.
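To make that concrete, a DIY watchdog boils down to: poll the primary, and promote the standby after several consecutive failures (so a transient blip doesn't trigger a flapping failover). A rough sketch, assuming `pg_isready` and `pg_ctl` (both ship with PostgreSQL) are on the PATH - the threshold and interval are arbitrary choices, and real tools like Patroni add leader election on top to avoid split-brain:

```python
import subprocess
import time

FAILURE_THRESHOLD = 3  # consecutive failed health checks before promoting

def should_promote(consecutive_failures: int) -> bool:
    """Only promote after several failed checks in a row,
    to avoid failing over on a transient network blip."""
    return consecutive_failures >= FAILURE_THRESHOLD

def primary_is_up(host: str, port: int = 5432) -> bool:
    """pg_isready exits 0 when the server is accepting connections."""
    result = subprocess.run(
        ["pg_isready", "-h", host, "-p", str(port)],
        capture_output=True,
    )
    return result.returncode == 0

def watchdog(primary_host: str, standby_data_dir: str) -> None:
    failures = 0
    while True:
        failures = 0 if primary_is_up(primary_host) else failures + 1
        if should_promote(failures):
            # pg_ctl promote is the stock way to turn a standby into a primary.
            subprocess.run(["pg_ctl", "promote", "-D", standby_data_dir], check=True)
            return
        time.sleep(5)
```

This sketch omits the genuinely hard part: making sure the old primary can't come back and accept writes, which is why a battle-tested tool beats a homegrown script.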
I remember when Salesforce was a true innovator. I attended a Salesforce conference the other day. Sounds like they're all-in on their "agentic" vision. I miss when the Salesforce conferences were tailored to nerds like me and included products that the CEO would never hear about but my team would fight tooth and nail to keep in the budget.
Metal is technically more elastic than an elastic band.
With a Young’s modulus of 69 GPa for aluminum versus just 2 GPa for ABS, metal has the "memory" to snap back from significant loads. Plastic, true to its name, is far more likely to hit its limit and stay permanently deformed. (That’s why steel bars are used to give buildings “flexibility”: the steel handles tension and bending, while the concrete provides compressive strength.)
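The numbers make the point directly via Hooke's law (strain = stress / modulus, valid in the elastic regime). Using the moduli above and an assumed 50 MPa load for illustration:

```python
# Hooke's law in the elastic regime: strain = stress / Young's modulus.
E_ALUMINUM_GPA = 69.0
E_ABS_GPA = 2.0
STRESS_MPA = 50.0  # assumed load, comfortably inside aluminum's elastic range

def strain(stress_mpa: float, modulus_gpa: float) -> float:
    """Dimensionless strain; converts GPa to MPa so units cancel."""
    return stress_mpa / (modulus_gpa * 1000.0)

# Aluminum stretches about 0.07% and springs back.
# ABS would need to stretch 2.5% - around or past its yield point,
# so it deforms permanently instead.
print(strain(STRESS_MPA, E_ALUMINUM_GPA))  # ~0.000725
print(strain(STRESS_MPA, E_ABS_GPA))       # 0.025
```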
Texas, technically, generates more TWh than California. I think a data center boom followed by a bust would help a lot more than anything California can do. Unlike with cars, California's market size and regulations can't help or hinder other fuel sources as much.
Perovskites, compounds with unique electronic properties previously confined to specialist applications, could enable neuromorphic computers that process and store data simultaneously, as human brains do. Halide perovskites can form memristors, circuit components whose electrical resistance switches between high and low states, allowing them to function as both artificial neurons and synapses in brain-like computing systems. Researchers expect to assemble prototype neuromorphic networks using perovskite-based memristors and capacitors within the next year, though commercial success remains uncertain.
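The switching behavior described there is easy to caricature in code: a memristor holds one of two resistance states, a voltage pulse past a threshold flips it, and a small read voltage senses it without disturbing it. A toy model with purely illustrative values (real devices are analog and far messier):

```python
# Toy two-state memristor. All numbers are illustrative, not from any
# real perovskite device.
R_HIGH, R_LOW = 1e6, 1e3    # ohms: high- and low-resistance states
V_SET, V_RESET = 1.0, -1.0  # volts: switching thresholds

class Memristor:
    def __init__(self):
        self.resistance = R_HIGH  # start in the high-resistance state

    def apply(self, voltage: float) -> None:
        """A positive pulse past V_SET writes the low state; a negative
        pulse past V_RESET writes the high state. In between, the state
        persists - that persistence is the 'memory' in memristor."""
        if voltage >= V_SET:
            self.resistance = R_LOW
        elif voltage <= V_RESET:
            self.resistance = R_HIGH

    def read(self, voltage: float = 0.1) -> float:
        """A small read voltage senses the state via Ohm's law: the
        measured current encodes the stored bit."""
        return voltage / self.resistance
```

In a neuromorphic array, the stored resistance doubles as a synaptic weight, so the same element both remembers and computes - which is the "process and store simultaneously" claim.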
A distribution agreement is generally different from a sale: distributors act as agents of the manufacturer, so the transfer to the distributor isn't yet counted as a sale. Most warranties are limited to the first owner and don't transfer. How do you think this squares with that? Does it mean I don't get a warranty on the dishwasher I bought from Costco? It's also that same agency principle that lets the manufacturer have a contract with you.
Funny you would pick this analogy. I feel like we’re back in the mainframe era. A lot of software can’t operate without an internet connection. Even if in practice some of the code executes on your device, much of the data and the heavyweight processing already happens on the server. Even basic services designed from the ground up to be distributed and local-first - like email, which you literally “download” - are now mostly used through a server, like Gmail. Maps apps added offline support years after they launched and still cripple search. Even git has GitHub sitting in the middle, and most people don’t or can’t use git any other way. SaaS, Electron, etc. have brought us back to the mainframe era.
It's always struck me as living in some sort of bizarro world. We now have these super powerful personal computers, both handheld (phones) and laptops (my M4 Pro smokes even some desktop-class processors), and yet I use all this powerful compute hardware to... be a dumb terminal to someone else's computer.
I had always hoped we'd do more locally on-device (and with native apps, not running 100 instances of chromium for various electron apps). But, it's hard to extract rent that way I suppose.
I don't even understand why computer and phone manufacturers try to make their devices faster anymore, since for most computing tasks the bottleneck is all the data that has to be transferred to and from the modern version of the mainframe.
There are often activities that do require compute though. My last phone upgrade was so Pokemon Go would work again, my friend upgrades for the latest 4k video or similar.
Yet manufacturers give us thinner and thinner phones every year (instead of using that space for the battery), and make it difficult to swap out batteries which have degraded.
> make it difficult to swap out batteries which have degraded.
That's the part that pisses me off the most. They all claim it's for the IP68, but that's bullshit. There's plenty of devices with removable backs & batteries that are IP68.
My BlackBerry Bold 9xxx was 10mm thick. The iPhone 17 Pro Max is 8.75mm. You aren't going to notice the 1.25mm difference, and my BlackBerry had a user-replaceable battery - no tools required, just pop off the back cover.
The BlackBerry was also about 100 grams lighter.
The non-user removable batteries and unibody designs are purely for planned obsolescence, nothing else.
Also, when a remote service struggles I can switch to doing something else. When local software struggles it brings my whole device to its knees and I can't do anything.
I think people have been finding more compelling use cases for the fact that information systems can be multi-player now than for marginal FLOPS. Client-server is just a very effective way of organizing multi-player information systems.