Fast, battle-tested, Vue 2-like approach, great documentation, good community. The automatic independent scalability as an option is usually the main selling point of these solutions, but honestly I think the real pro is the "composition" approach, which is essential if you want to keep a clean and well-organized codebase. In this regard, I found Moleculer pretty great even for large teams.
Intl is the ECMAScript Internationalization API (ECMA-402). Browsers are not the only JS implementations that include it. Node.js does, Deno does, probably others do too.
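To illustrate: the very same Intl calls run unchanged outside the browser, for example under Node.js (assuming a build with full ICU data, which has been the default since Node 13):

```javascript
// Intl (ECMA-402) in plain Node.js, no browser required.
const currency = new Intl.NumberFormat("de-DE", {
  style: "currency",
  currency: "EUR",
});
console.log(currency.format(1234.5)); // e.g. "1.234,50 €"

const month = new Intl.DateTimeFormat("en-US", { month: "long" });
console.log(month.format(new Date(2021, 0, 15))); // "January"
```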
I'm working with microservices all the time. Even when working solo. It's easy to focus on a single domain while developing a single part of the system, just like coding modules or classes.
That said, I'm always wondering why we conflate so many things when talking about microservices: deployment and scalability, multi-repo and code structure, integration tests and distributed transactions, and so on. I mean: you can definitely build your application as a monolithic process with a microservices architecture. It's just separation of concerns all the way down, plus a proper abstraction for the communication channel between the "modules". You don't need to embed your deployment structure in your repos and your code. These are "framework" problems. Just use a framework that abstracts all the deployment details until it's time to push the code to prod and you have to split your code into different containers. This is why I'm now settled on Moleculer (https://moleculer.services/). It just works.
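The "proper abstraction for the communication channel" can be sketched in a few lines of plain Node.js. This is a hypothetical minimal broker, not Moleculer's actual API: services talk only through `call()` by service/action name, so the transport behind `call()` can later be swapped from in-process dispatch to a network round-trip without touching the business code.

```javascript
// Hypothetical minimal broker: services register actions by name and talk
// only through call(), so the transport behind call() can change
// (in-process today, a message bus once you split into containers).
const registry = new Map();

function createService(name, actions) {
  for (const [action, handler] of Object.entries(actions)) {
    registry.set(`${name}.${action}`, handler);
  }
}

async function call(fullName, params) {
  const handler = registry.get(fullName);
  if (!handler) throw new Error(`Unknown action: ${fullName}`);
  // Swapping this line for an HTTP/NATS round-trip changes the
  // deployment, not the services' code.
  return handler({ params });
}

createService("users", {
  get: ctx => ({ id: ctx.params.id, name: "Ada" }),
});

createService("greeter", {
  hello: async ctx => {
    const user = await call("users.get", { id: ctx.params.id });
    return `Hello, ${user.name}!`;
  },
});

call("greeter.hello", { id: 1 }).then(msg => console.log(msg)); // "Hello, Ada!"
```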
Well, it's all about the data responsibility: who owns the data, and how others can access it. Once you have defined these, you see that you can "share the access" with other microservices (for example, read-only access through a view), as long as the ownership and the access rules are preserved.
Yes, a view would be exactly how I would address the problem I described in my other answer above ("What happens if I need to change zip code representation in a data source that is read by multiple microservices?").
But this also means that we are now back in "YesSQL" territory, and specifically that we have to use an RDBMS that lets us create views.
Goodbye NoSQL, goodbye key-value datastores. (Or maybe you will just create an extra "newZipCode" field and maintain it in parallel with "ZipCode", allowing every other consumer to adapt at their leisure...?)
So it is another step back toward "more traditional ways" of designing a system... or a recipe for disaster as soon as you start dealing with significant quantities of "state".
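The zip-code case can be sketched as a plain SQL migration (hypothetical table and column names): the owning service changes its internal representation but keeps a read-only view with the old shape, so other consumers keep working until they migrate.

```sql
-- Hypothetical: the owning service renames its column...
ALTER TABLE addresses RENAME COLUMN zip_code TO postal_code_v2;

-- ...and exposes a read-only view preserving the old contract
-- for the other microservices.
CREATE VIEW addresses_legacy AS
SELECT id, street, postal_code_v2 AS zip_code
FROM addresses;
```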
Separating the microservice architecture concept from "how you are going to deploy your system", as per your example, is exactly what I've been trying to explain to my teams since the inception of microservice architecture. Too many concepts are conflated into the term "microservice": code architecture (separation of concerns), source code organization, deployment, etc. This is very confusing, which is why it's now common to say that microservices are "hard".
It's not subjective. At least not in the way you are wording it. Bitcoin's algorithm has no purpose: it burns power to mine a resource that has no inherent value.
Gaming has obvious value. Entertainment in all forms is necessary for people to feel happy.
It's really not that hard to read or to write. I have trained dozens of young developers on it with almost no supervision and no issues. It's really pretty straightforward.
Also, I can replicate my validations at many levels down the stack, from the client to the API gateway to the database model, all from a single definition.
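A minimal sketch of the single-definition idea (hypothetical `userSchema` and `validate` helper, not any particular library's API): one declarative schema, reusable verbatim on the client, at the API gateway, and in front of the database model.

```javascript
// Hypothetical single source of truth for the validation rules.
const userSchema = {
  name: v => typeof v === "string" && v.length > 0,
  age: v => Number.isInteger(v) && v >= 0,
};

// The same check can run in the browser, in the API gateway,
// and in the persistence layer.
function validate(schema, data) {
  return Object.entries(schema).every(([field, check]) => check(data[field]));
}

console.log(validate(userSchema, { name: "Ada", age: 36 })); // true
console.log(validate(userSchema, { name: "", age: -1 }));    // false
```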
Great software. We are managing terabytes of stock data and real-time market-scanner queries across all markets (billions of book and time-and-sales events) with hundreds of concurrent requests.
We were using kdb+ before, but ClickHouse is more scalable, way cheaper, and much easier to grasp for a newbie.
I know it's been a few days but what was the driver behind this question? My company uses CH and has near-real-time data being added (~5min delay from live), is this a problem and/or is there something about CH that doesn't play nice with this method? Or were you just interested in this particular case of inserting real time data?
Actually no, but we are ingesting a whole day of EOD market data (thousands of symbols, millions of time-and-sales records, billions of book events) in under a couple of hours. That is 4x faster than the pace of the real-time market (not counting extended hours, due to their limited volume).
Years of daily intraday market data. We have an ingestion engine written in C# and a Node.js backend to run the queries (with an in-house driver built on top of the HTTP API).
After "seapiracy" I don't really think we are addressing the biggest issue here. I would like to know if there's anything like that for fish. I would buy anything that tastes like tuna but it's fully vegetable, or any other fishes. Just give me the same taste on my sushi, please.
Yes, it exists for many year. Ask you local vegan community (as product in this field are very different from place to place). We have several vegan sushi places, and the omnis that I know like the food there very much.
Hey, thanks. I'm aware there are so-called alternatives (Banana Blossom, Tofish and such). Unfortunately, from my experience they don't taste like fish yet.