One topic of debate at my company that I rarely see here is shared libraries vs microservices. Should logic or external calls live in a shared artifact consumed by microservices, or be hidden away behind an exposed microservice API?
The truth is that microservices are a tool. Whatever approach your organization has broadly agreed on for using that tool is the correct one.
Having worked with shared library systems, yeah, micro or even macro services are far and away preferable to deal with.
At the very least, having clear boundaries on data ownership is something I really wish my company had done from the start. We have legacy structures that are very hard to iterate on because they get loaded by ~20 different applications directly from the database. That means any minor change to such a structure requires careful planning and a rollout of the shared library to those 20 services.
Were these behind a service, we'd still have to negotiate the structure (we couldn't, for example, delete a field willy-nilly), but we'd be able to add new fields or change the way those fields are populated to meet current business requirements without a mass rollout.
I do think the micro/nano approach can often be wrong. I think a service should be as big as it needs to be to cover the domain it's dealing with. But, importantly, who owns which bit of information is by and large the most important thing to get right.
I think there is also a category of shared library, like a color library, that could be a microservice but could also just be a library. Since it's easy to have services upgrade to its newest version without creating cross dependencies, it may make more sense as a library.
It can be hard to determine when something should be a service or library and I don't know if I have a clear answer either.
We also have, for example, libraries that purely compute data. However, they have to be consistent across the system and it would actually be pretty helpful if they were a microservice instead as the computations often need to be changed or updated.
Perhaps when it's business logic it should be a service?
I definitely see the need for libraries like compression, IO help, or even just pure math like doing matrix multiplication where older ways to compute aren't necessarily incorrect.
Definitely what I'm saying. But the point about libraries which don't own data is also valid and those can be useful as services in some cases. Particularly when we are talking about something like a set of business rules to follow.
At a bare minimum, if there's some sort of mutable data involved then I think there should be a single service that owns it. But I could also see reasons to create services which don't involve mutable data.
It does somewhat come down to complexity though. Like, for example, I think a "matrix multiplier" service would be silly. Or even just a general "do-math" service. But a service that, for example, takes in raw video and spits out compressed video? Now we are talking about something that probably is better as a service and not a library as you likely want to do things like control encoding values and standards system wide.
Just a little about my background. The systems I worked on, for example, shared user information using the `user-lib` library which contained all the details of how to fetch (and insert, and update) a user from the users table. Many services used that library to pull user information which has been a mess to untangle.
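A minimal sketch of what that kind of coupling looks like. The names and shapes here are made up (this is not the actual `user-lib` API), and an in-memory map stands in for the real users table:

```java
// Hypothetical sketch of a shared "user-lib" style artifact: every consuming
// service compiles against this exact record shape and query logic, so any
// column change ripples through all of them. (An in-memory map stands in for
// the real users table; names are illustrative.)
import java.util.HashMap;
import java.util.Map;

public class UserLibSketch {
    // The shared record: all ~20 services depend on this exact shape.
    public record User(long id, String name, String email) {}

    // The shared data-access logic, talking "directly to the table".
    static final Map<Long, User> USERS_TABLE = new HashMap<>();

    public static User fetchUser(long id) {
        return USERS_TABLE.get(id); // the real version would run SQL here
    }

    public static void insertUser(User u) {
        USERS_TABLE.put(u.id(), u);
    }

    public static void main(String[] args) {
        insertUser(new User(1L, "Ada", "ada@example.com"));
        // Every consumer binds to User's fields at compile time: rename or
        // retype a column and all 20 services must be rebuilt and redeployed
        // in lockstep.
        System.out.println(fetchUser(1L).name());
    }
}
```

Behind a service, only the service would bind to the table shape; consumers would see a negotiated API instead.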
I’m also a long-time Java Spring developer. I started writing a game recently and was really surprised by how bad the performance can be when you run it in a tight game loop.
The startup time is also a real problem, as you really want to be able to scale up pods quickly.
That said, it’s good enough right now. You can make it work at scale, and it’s worth the cost trade off of trying to do it more efficiently in a different language.
I would be curious to see how a Rust microservice would compare in my company's infrastructure. How much cloud savings could we squeeze out?
Writing high performance Java is definitely a bit of a dark science; a lot of the performance isn't in the code loop you're looking at, and memory allocations matter quite a lot as well. Complex hierarchies of large long-lived objects can absolutely tank your performance.
There can also be a lot of performance to be gained by going off heap. I'd probably be looking at an ECS design around MemorySegments, rather than modelling the game state with Java objects. Though, to be fair, this is how you'd write a game engine in C++ as well.
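As a rough sketch of the off-heap idea, assuming JDK 22+'s `java.lang.foreign` API: position data lives in one flat MemorySegment rather than in per-entity objects. The layout and names are illustrative, not a full ECS:

```java
// Rough sketch of off-heap component storage: x/y positions for all entities
// live in a single MemorySegment ([x0, y0, x1, y1, ...]) with no per-entity
// object headers, and the GC never has to scan the buffer.
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class OffHeapPositions {
    // Initializes entityCount entities at x=e, y=0, advances every x by
    // vx*dt, and returns entity 0's new x.
    public static float simulate(int entityCount, float vx, float dt) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment positions =
                arena.allocate(ValueLayout.JAVA_FLOAT, (long) entityCount * 2);

            for (int e = 0; e < entityCount; e++) {
                positions.setAtIndex(ValueLayout.JAVA_FLOAT, e * 2L, e);      // x
                positions.setAtIndex(ValueLayout.JAVA_FLOAT, e * 2L + 1, 0f); // y
            }

            // Hot loop streams through contiguous memory, which is cache-friendly.
            for (int e = 0; e < entityCount; e++) {
                long xi = e * 2L;
                float x = positions.getAtIndex(ValueLayout.JAVA_FLOAT, xi);
                positions.setAtIndex(ValueLayout.JAVA_FLOAT, xi, x + vx * dt);
            }
            return positions.getAtIndex(ValueLayout.JAVA_FLOAT, 0);
        } // confined arena frees the off-heap memory deterministically here
    }

    public static void main(String[] args) {
        System.out.println(simulate(1_000, 3.0f, 0.016f));
    }
}
```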
Spring Boot startup time is indeed a problem, especially when scaling horizontally on low cpu nodes.
If your environment allows for burst CPU usage until the service is ready to accept traffic, you can start up really fast; Spring does so much reflection magic during startup that can't "trivially" be done at compile time. You can include hints for runtime configuration from a build, but that doesn't do much to help in really low-CPU environments.
Then you can of course just use native images, but you lose some of the Spring "magic", and it might be annoying to refactor towards.
This actually makes me wonder if it's possible to preserve post-startup state and then restore it, as a way to mitigate the long computation phase during startup. I bet it is; maybe we could just serialize the application context and restore it.
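A toy sketch of that snapshot idea, using plain Java serialization on a stand-in data structure. (A real Spring ApplicationContext is full of non-serializable beans, which is why projects like CRaC checkpoint the whole process instead; the "expensive init" below is made up.)

```java
// Compute expensive startup state once, serialize it to disk, and restore it
// on later boots instead of recomputing.
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;

public class StartupStateCache {
    // Stand-in for slow startup work (route tables, parsed config, etc.).
    static HashMap<String, Integer> expensiveInit() {
        HashMap<String, Integer> state = new HashMap<>();
        for (int i = 0; i < 100; i++) state.put("route-" + i, i);
        return state;
    }

    @SuppressWarnings("unchecked")
    static HashMap<String, Integer> loadOrInit(Path cache) throws Exception {
        if (Files.exists(cache)) {
            // Fast path: restore the snapshot.
            try (ObjectInputStream in =
                     new ObjectInputStream(Files.newInputStream(cache))) {
                return (HashMap<String, Integer>) in.readObject();
            }
        }
        // Slow path: compute, then snapshot for next time.
        HashMap<String, Integer> state = expensiveInit();
        try (ObjectOutputStream out =
                 new ObjectOutputStream(Files.newOutputStream(cache))) {
            out.writeObject(state);
        }
        return state;
    }

    public static void main(String[] args) throws Exception {
        Path cache = Files.createTempDirectory("boot").resolve("state.bin");
        loadOrInit(cache);                // first boot: computes and writes
        var restored = loadOrInit(cache); // second boot: reads the snapshot
        System.out.println(restored.get("route-42"));
    }
}
```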
> The startup time is also a real problem, as you really want to be able to scale up pods quickly.
I was learning a bit of Spring last week, and a Spring Boot web application generated via the web interface boots in like 800 ms:
...
Initializing Spring embedded WebApplicationContext
Root WebApplicationContext: initialization completed in 339 ms
Tomcat started on port 8080 (http) with context path '/'
Started DemoCourseApplication in 0.746 seconds (process running for 1.012)
...
Reusing my experience from other technologies... I'd say the issue might be in whatever you're doing in your initialization and/or how much stuff you're loading.
Looks like the core spring is decently fast, to me.
Could you expand on "how bad the performance can be" part?
If you are doing graphics, it's far more likely that you're doing something dumb there -- there are many pitfalls.
Also, unless you are doing something very CPU-heavy, there won't be any noticeable difference, as web servers are predominantly doing IO. Maybe slightly less RAM usage (but you could also just decrease the heap size to trade off a bit of CPU time for memory, if that made sense).
In my game loop I was using Optional. When I profiled it, the use of Optional was one of the slowest points, and it could be optimized away with plain null checks and ifs.
There were a lot of other slow areas as well, not where you would expect them.
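For illustration, here's the shape of that pattern (the entity lookup is made up, and this is not a rigorous benchmark): `Optional.ofNullable` allocates a wrapper per call unless the JIT manages to elide it, while the null check allocates nothing.

```java
// Same logic two ways: wrapping each nullable lookup in an Optional vs a
// plain null check. Both produce identical results; the Optional version
// creates a short-lived wrapper object on every iteration of the hot loop.
import java.util.Optional;

public class OptionalHotLoop {
    record Entity(int hp) {}

    static final Entity[] world = new Entity[1024];
    static {
        // Every other slot is empty, so the null path is actually exercised.
        for (int i = 0; i < world.length; i += 2) world[i] = new Entity(i);
    }

    // Optional style: one Optional wrapper per lookup.
    static int sumHpOptional() {
        int sum = 0;
        for (Entity e : world) {
            Optional<Entity> maybe = Optional.ofNullable(e);
            if (maybe.isPresent()) sum += maybe.get().hp();
        }
        return sum;
    }

    // Null-check style: no wrapper allocation at all.
    static int sumHpNullCheck() {
        int sum = 0;
        for (Entity e : world) {
            if (e != null) sum += e.hp();
        }
        return sum;
    }

    public static void main(String[] args) {
        // Same answer either way; only the allocation behavior differs.
        System.out.println(sumHpOptional() == sumHpNullCheck());
    }
}
```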
Sure, Optional is not the most optimal thing to use, though depending on how many entities you were operating on, that cost may still be negligible.
That's not my experience, though game development is certainly a niche and Java may not be the top choice for that.
You might sometimes have to reach for SoA-like structures and reference them via indices, at least for the core ECS, but for the rest you can easily use bog-standard Java -- not everything has to be "ultra-fast, specially written java", just certain hot loops.
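A minimal sketch of that SoA idea, with made-up field names: one primitive array per component, and an index serving as the "reference" to a particle.

```java
// Structure-of-arrays layout: instead of an array of Particle objects, keep
// one primitive array per field and refer to a particle by index.
public class SoAParticles {
    final float[] x, y, vx, vy;   // one contiguous array per component
    int count;

    SoAParticles(int capacity) {
        x = new float[capacity];  y = new float[capacity];
        vx = new float[capacity]; vy = new float[capacity];
    }

    int spawn(float px, float py, float pvx, float pvy) {
        int id = count++;         // the "reference" is just this index
        x[id] = px; y[id] = py; vx[id] = pvx; vy[id] = pvy;
        return id;
    }

    // The hot loop streams through contiguous primitives: no object headers,
    // no pointer chasing, and it auto-vectorizes more readily.
    void step(float dt) {
        for (int i = 0; i < count; i++) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
        }
    }

    public static void main(String[] args) {
        SoAParticles p = new SoAParticles(16);
        int id = p.spawn(0f, 0f, 2f, 0f); // at origin, moving +x at 2 units/s
        p.step(0.5f);
        System.out.println(p.x[id]);      // 1.0
    }
}
```

Everything outside loops like `step` can stay bog-standard Java objects.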
Right. I didn’t start digging into it until I realized the application was clearly sluggish in certain scenarios. As I’m doing this for fun, I’m going to see what kind of speedup I get in Rust for this.
Cold latency is an issue with microservices. If you need to use Java, you'll likely end up using frameworks like Spring or Quarkus, which somewhat diminish the advantages of using the JVM and Java as a language.
At that point, you might as well start with Go from the beginning.
Stack Overflow famously ran on a single, although quite beefy, server for a long time (not any longer, but not for performance reasons AFAIK).
I think it's a good data point to have: compare your workload's scale to Stack Overflow's, and reconsider the hardware costs.
(Obviously horizontal scaling has its place, but if a workload is that variably scalable, maybe there are better solutions, e.g. a single bigger instance -- often even cheaper.)
Double agree. If Java was the right choice 10-15 years ago for cheap enterprise apps, then certainly NodeJS is the way to go today. There are heaps of cheap developers and the ecosystem is ginormous.
Once you are out of the dating game you may get married and start a family. That also has a mental overhead, and it can be hard to keep your kids' schedules aligned with the demands and timelines of work in tech.
I think you learn to bear the load more gracefully as you age and mature. It also puts life more into perspective.
My main point is you learn to handle it, which is actually a good form of personal growth.
The reason people summarize things, often incorrectly, is that the world has become too complicated to model. It’s an outcome of the situation, not just a corresponding trend.
> FINRA 2360 provides disclosure and tax requirements to any organization
Nitpick: FINRA rules only apply to FINRA members. For the general versions, you want the SEC rule. (FINRA copies a lot of SEC rules so they can enforce them without the restrictions of a government.)
SG is a very small country, and yet someone there was buying 1/3 of the volume of the US market in Q3.
The DC market in SG has a very low vacancy rate, meaning there's very little available space... i.e. where are all those GPUs going if the market has so little space available?
To play devil's advocate, this could be an Nvidia reporting quirk, with all sales in SE Asia being reported as "Singapore", but even then the numbers piqued my interest.
Singapore lifted its moratorium on new DCs in 2022 [1], and since then has been expanding its DC capacity aggressively [2].
You're also right that it could be a reporting quirk.
I don't think we can say definitively that Chinese companies didn't use Singapore as a conduit for purchasing advanced GPUs, but I haven't come across any evidence that they did. (If anyone knows otherwise, please correct me.)