No, but a lot of AI-adjusted wording has the very idiosyncratic AI style that is prevalent in the AI slop that is everywhere, and that style has quickly become associated with writing that is generally devoid of content and insight. So it is natural to have gut reactions to the typical phrasings that have become associated with AI.
As others have said, this is more of a constraint programming system than Wave Function Collapse. Whatever one wants to call it, I liked it.
For guiding the search, you might want to consider search steps that select only a single feature, for example that a pair of adjacent tiles should be connected by a road, and just propagate that information. That could be used to guide the search on high-level features first, and then later realize those plans by doing the normal search.
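As a minimal sketch of the idea (with a hypothetical tile set and edge labels, nothing from the original project): asserting one high-level feature, "a road crosses the edge between these two cells", just restricts the two cells' domains, after which normal propagation and search can take over.

```python
# Each tile names the label on its four edges: (north, east, south, west).
# "r" = road, "g" = grass. This tile set is made up for illustration.
TILES = {
    "grass":     ("g", "g", "g", "g"),
    "road_ew":   ("g", "r", "g", "r"),
    "road_ns":   ("r", "g", "r", "g"),
    "corner_ne": ("r", "r", "g", "g"),
}

def assert_road_between(domains, left, right):
    """Restrict two horizontally adjacent cells so a road crosses their
    shared edge (the left cell's east edge, the right cell's west edge)."""
    domains[left]  = {t for t in domains[left]  if TILES[t][1] == "r"}
    domains[right] = {t for t in domains[right] if TILES[t][3] == "r"}

# All cells start unconstrained; assert one feature and nothing else.
domains = {(x, 0): set(TILES) for x in range(3)}
assert_road_between(domains, (0, 0), (1, 0))
print(sorted(domains[(0, 0)]))  # only tiles with a road on their east edge remain
```

The point is that the feature-level step is cheap and composable: many such assertions can be placed first, and the ordinary tile-level search then only fills in details consistent with them.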
I have a (very slight) beef with the name Algorithm X: the interesting part, the Dancing Links technique, is more of a data structure for managing undo information during backtracking than an algorithm. It is a very fun, useful, and interesting data structure, but it doesn't really change what steps are performed in the backtracking search.
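To illustrate the undo-information point, here is a minimal sketch of the Dancing Links trick (simplified, not a full exact-cover solver): an unlinked node keeps its own pointers intact, so removing it from a doubly linked list and restoring it on backtrack are both O(1).

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = self.next = self  # circular, single-node list

def insert_after(anchor, node):
    node.prev, node.next = anchor, anchor.next
    anchor.next.prev = node
    anchor.next = node

def unlink(node):
    # Remove from the list, but leave the node's own pointers alone.
    node.prev.next = node.next
    node.next.prev = node.prev

def relink(node):
    # The "dancing" step: the node still remembers its neighbours,
    # so undoing the removal needs no extra bookkeeping.
    node.prev.next = node
    node.next.prev = node

head = Node(None)
a, b, c = Node("a"), Node("b"), Node("c")
for n in (c, b, a):   # build head -> a -> b -> c
    insert_after(head, n)

unlink(b)   # cover: b disappears from traversal
relink(b)   # uncover on backtrack: b is back, no search needed
```

The backtracking search itself is unchanged; this only makes its undo step trivially cheap.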
In my view, Scrum is a way to force dysfunctional teams to have some process; it is not useful for a team that is already delivering and working in a small-a agile manner.
I like the intelligence-per-watt and intelligence-per-joule framing in https://arxiv.org/abs/2511.07885. It feels like a very useful measure for thinking about long-term sustainable variants of AI build-outs.
Depends on which cache you mean. The KV Cache gets read on every token generated, but the prompt cache (which is what incurs the cache read cost) is read on conversation starts.
The prompt cache caches KV cache states keyed on prefixes of previous prompts and conversations. Now, for a particular coding agent conversation, caching might be more involved (with cache handles and so on); I'm talking about the general case here. This is a way to avoid repeating the same quadratic-cost computation over the prompt. Typically, LLM providers price reads from this cache much lower than computing it again.
Since the prompt cache is (by necessity, this is how LLMs work) keyed on a prefix of the prompt, if you make repeated API calls in some service, there are big savings possible by organizing queries so that the rarely varying things come first and the more varying things later. For example, if you included the current date and time as the first data point in your call, that would force a recomputation every time.
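As a sketch of that ordering effect (hypothetical prompt layout, not any particular provider's API): only the shared prefix between two calls can be served from a prefix cache, so moving the timestamp from first to last turns the whole system/policy text into a reusable prefix.

```python
from os.path import commonprefix

SYSTEM = "You are a support assistant for ExampleCorp.\n"
POLICY = "Always answer in English. Cite ticket numbers.\n"

def bad_prompt(question, now):
    # Timestamp first: the very first tokens differ on every call,
    # so the cacheable shared prefix is just a few characters.
    return f"Time: {now}\n{SYSTEM}{POLICY}Question: {question}\n"

def good_prompt(question, now):
    # Stable parts first, volatile parts last: the whole system and
    # policy text becomes a reusable cached prefix.
    return f"{SYSTEM}{POLICY}Time: {now}\nQuestion: {question}\n"

def shared_prefix_len(a, b):
    return len(commonprefix([a, b]))

bad = shared_prefix_len(bad_prompt("Where is my order?", "2024-05-01T10:00:00"),
                        bad_prompt("Refund status?",     "2024-05-01T10:03:12"))
good = shared_prefix_len(good_prompt("Where is my order?", "2024-05-01T10:00:00"),
                         good_prompt("Refund status?",     "2024-05-01T10:03:12"))
```

Here `good - bad` is exactly the length of the stable system and policy text, i.e. the part a prefix cache can reuse.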
Yes. The cache that caches KV cache states is called the KV cache. The "prompt cache" is just an index from string prefixes into the KV cache. It's tiny and has no computational impact. The parent was correct to question you.
The cost of using it comes from two things: later tokens require more compute to calculate, and the KV cache entries have to be kept somewhere between one user's requests while the system processes other users' requests.
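A back-of-the-envelope sketch of the first point (a toy cost model, not real FLOP counts): attention for the token at position t looks back at all t earlier positions, so prefilling an n-token prompt from scratch costs roughly n²/2 units, while resuming from a cached prefix of k tokens only pays for the tail.

```python
def prefill_cost(n, cached_prefix=0):
    # Sum of per-token attention work for the uncached positions:
    # token t attends to t earlier positions.
    return sum(t for t in range(cached_prefix + 1, n + 1))

full = prefill_cost(10_000)                        # no cache: ~n^2/2 units
tail = prefill_cost(10_000, cached_prefix=9_000)   # 90% of the prompt cached
```

Even with most of the prompt cached, the tail is not free, and on top of that the cached KV state has to live somewhere between requests.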
Saying that it is just an index from string prefixes into the KV cache misses all the fun, interesting, and complicated parts of it. While the prompt-pointers are technically tiny compared with the data they point into, the massive scale of managing this across all users and requests, and routing it inside the compute cluster, makes it an expensive thing to implement and tune. Keeping the prompt cache sufficiently responsive and storing the large KV caches somewhere also costs a lot in resources.
Technically, Cerebras' solution is really cool. However, I am skeptical that it will be economically useful for larger models, as the number of racks required scales with the size of the model in order to fit the weights in SRAM.
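A rough back-of-the-envelope sketch of that scaling concern. The figures here are assumptions for illustration only: roughly 44 GB of on-wafer SRAM (the commonly cited WSE-3 number) and 16-bit weights; real deployments shard and quantize differently.

```python
import math

SRAM_PER_WAFER_GB = 44  # assumed on-wafer SRAM budget for weights

def wafers_needed(params_billion, bytes_per_weight=2):
    # 1e9 params at `bytes_per_weight` bytes each is ~params_billion * bytes GB.
    weights_gb = params_billion * bytes_per_weight
    return math.ceil(weights_gb / SRAM_PER_WAFER_GB)

for params in (8, 70, 405):
    print(f"{params}B params -> at least {wafers_needed(params)} wafer(s)")
```

Under these assumptions the minimum wafer count grows linearly with parameter count, which is the economic scaling worry.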
An LLM only outputs tokens, so this could be seen as an extension of tool calling, where the model has been trained on the knowledge and use cases for "tool-calling" itself as a sub-agent.
Sort of. It’s not necessarily a single call. In the general case it would be spinning up a long-running agent with various kinds of configuration — prompts, but also coding environment and which tools are available to it — like subagents in Claude Code.
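A sketch of the distinction (hypothetical schema, not any provider's actual API): spawning a subagent is still just a tool call emitted as tokens, but its arguments carry a whole agent configuration rather than a single function input.

```python
import json

# Hypothetical payload a parent agent might emit; the field names and the
# "spawn_subagent" tool are made up for illustration.
subagent_call = {
    "type": "tool_call",
    "name": "spawn_subagent",
    "arguments": {
        "system_prompt": "You are a test-writing specialist.",
        "tools": ["read_file", "write_file", "run_tests"],  # restricted toolset
        "working_dir": "/repo/tests",
        "max_turns": 20,  # long-running loop, not a single request/response
    },
}

# The parent streams this out as ordinary tokens; the harness parses it
# and starts a separately configured agent loop with its own environment.
wire_format = json.dumps(subagent_call)
```

The interesting part is in the harness, which interprets one token stream as the configuration for another long-running one.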