It's interesting to see the various perspectives on heap sizing and how it can affect system performance. The author's suggestion of using a fixed heap size for studying a system and a growable heap for the average user makes sense. However, the idea of an adaptive heap size can also be useful in certain situations.
It's clear that there will always be some level of heuristic badness and a proliferation of knobs and dials to tune for optimal system performance. However, having a general framework for approaching heap sizing can provide structure and keep the number of knobs manageable.
For me, Forth, with its small memory footprint and ability to manipulate memory directly, can be leveraged to help solve heap sizing issues. Its flexibility in managing memory both interactively and per function/word can allow for more efficient use of resources and possibly reduce the need for complex heap sizing algorithms. Well, that is my very biased opinion anyways.
Startup, shutdown and steady state tend to have different memory profiles, and there are at least some conventions that can telegraph what state you’re in.
Forked node processes for instance can send an event when they first call listen(). That doesn’t work for all application architectures, but it would be easy enough for a program to tell when it’s about to shift into a different mode and adjust behaviors.
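One way to picture that: the program signals the runtime when it crosses a phase boundary (for a server, right after it starts listening), and the collector swaps in a different growth policy. Here's a language-neutral sketch in Python; the phases, the growth factors, and the `HeapPolicy` class are all hypothetical illustrations, not any real runtime's API:

```python
from enum import Enum

class Phase(Enum):
    STARTUP = "startup"    # bursty allocation while loading code/config
    STEADY = "steady"      # serving traffic with a stable working set
    SHUTDOWN = "shutdown"  # draining; no reason to grow the heap

# Hypothetical per-phase heap growth factors (not any real runtime's knobs).
GROWTH_FACTOR = {
    Phase.STARTUP: 2.0,   # grow aggressively to avoid GC thrash during load
    Phase.STEADY: 1.2,    # track the working set with modest headroom
    Phase.SHUTDOWN: 1.0,  # stop growing entirely
}

class HeapPolicy:
    def __init__(self) -> None:
        self.phase = Phase.STARTUP

    def on_phase_change(self, phase: Phase) -> None:
        # A server would call this when it shifts modes, e.g. right
        # after it begins listening, or when it starts draining.
        self.phase = phase

    def next_limit_mb(self, live_mb: float) -> float:
        # New heap limit = live data scaled by the current phase's factor.
        return live_mb * GROWTH_FACTOR[self.phase]

policy = HeapPolicy()
startup_limit = policy.next_limit_mb(100.0)   # 200.0 while in startup
policy.on_phase_change(Phase.STEADY)
steady_limit = policy.next_limit_mb(100.0)    # smaller limit once steady
```

The point isn't the specific numbers; it's that a one-bit signal from the application ("I've changed modes") gives the collector information no amount of curve-fitting on allocation history can recover as quickly.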
Cool to see this new garbage collection work from OOPSLA 2023 coming up in blog posts! The blog post doesn't mention that this new algorithm looks like it's going to become the default in Firefox, and is being implemented in Chrome [1]. Also, the paper is open access so click the links if you want more details, it won't be paywalled like some other papers.
Can you give a more detailed explanation about this algorithm?
It seems that it boils down to "Give the process twice the heap and it spends half the time garbage collecting?"
Looking at the graphs, the old garbage collector didn't tend to expand without bound; it clearly had a smaller max size than the new one. In addition, the old GC was pretty near the "knee" of the curve: allocate less memory and GC time goes up, allocate more and GC time goes down.
I'm seeing that you get about 20% improvement at the center, at the expense of twice the delay or twice the memory at the ends.
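The "twice the heap, half the GC time" intuition above is roughly the standard first-order cost model: each collection does work proportional to the live data, and a collection fires every time allocation refills the slack (heap minus live), so GC overhead scales like live / (heap − live). A minimal sketch of that model (the `gc_overhead` function is my illustration, not the paper's exact cost function):

```python
def gc_overhead(heap_mb: float, live_mb: float) -> float:
    """First-order GC cost model: work per collection ~ live data,
    collections per byte allocated ~ 1 / (heap - live), so the
    relative time spent in GC ~ live / (heap - live)."""
    slack_mb = heap_mb - live_mb
    if slack_mb <= 0:
        raise ValueError("heap limit must exceed live data")
    return live_mb / slack_mb

live = 100.0
base = gc_overhead(live + 100.0, live)    # slack 100 MB -> overhead 1.0
halved = gc_overhead(live + 200.0, live)  # double the slack -> overhead 0.5
```

Doubling the slack halves the overhead, and when live data is small relative to the heap, doubling the heap roughly doubles the slack. The same formula explains the knee: near the live-data size the curve blows up, while far above it the returns diminish.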
"Pebblesorters value creating heaps of pebbles that have a prime number of stones in them, and are horrified by heaps of pebbles with the "wrong" number of stones. We humans can understand the rule behind their moral intuitions (better than they can, even, since they seemingly haven't realized the rule they're following), but even though we understand what's "right and wrong" from their perspective we still find sorting pebbles a pointless waste of time.
Many humans think that any advanced enough artificial intelligence will be moral and good, because it'll be smart enough to understand right from wrong. And an advanced enough AI will understand human morality - maybe better than we do, even, it could perhaps deduce and codify the rules that humanity has been stumbling towards over our millennia of moral progress. But, despite understanding, it won't care any more than understanding pebblesorter morality makes us want to sort pebbles."
Well, that makes it less obscure, but if you're telling something in the form of a children's story with cartoons, dropping AI as a concept into the middle of it when you were talking about ancient cultures is jarring.
If you're going to use analogies, use analogies.
Sometimes you have to drop the cleverness and just talk to people.
It was a great read.