> It might be that regalloc needs to be taught to rematerialize
It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.
> but the backend is generally more local/has less visibility than the optimizer
I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge, LICM doesn't cross function boundaries either).
LICM can't possibly know the cost of hoisting, while regalloc does have decent visibility into cost. That's why this feels like a regalloc remat problem to me.
Sure. Any pass that is scoped to functions (or even loops, or basic blocks) will have increased scope if run after inlining, and most passes run after inlining.
In the context of this thread, your observation isn't meaningful. The point is that LICM doesn't cross function boundaries and neither does regalloc, so LICM has no greater scope than regalloc.
Really? I read the same sentence (as an American) and immediately thought that they must be referring to British English. Certainly nobody says brilliant as an affirmation here.
And "no problem" and "not bad" are both common colloquial statements in American English.
Apple has neglected the iTunes store for years. Yes, you can still buy tracks, but it's really crappy. 1) The catalog is nowhere near as extensive as Apple Music. 2) It's AAC 256kbps format only. Not lossless.
Apple goes along with the enshittification of everything and wants you to rent your music, not own it.
Right? If it's really true that some random person without compiler engineering experience implemented a completely new feature in the OCaml compiler by prompting an LLM to produce the code for him, then I think it really is remarkable.
It seems more like an inexperienced guy asked the LLM to implement something, and the LLM just output what an experienced guy had done before, and even gave him the credit.
Copyright notices and signatures in generative AI output are generally a result of the expectation created by the training data that such things exist, and are generally unrelated to how much the output corresponds to any particular piece of training data, and especially to who exactly produced that work.
(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and it can cause problems of false attribution, especially in this case, where it seems to have just picked the name of one of the maintainers of the project.)
Did you take a look at the code? Given your response, I figure you did not, because if you had, you would see that the code was _not_ cloned but genuinely composed by the LLM.
It’s one thing for you (yes, you, the user using the tool) to generate code you don’t understand for a side project or one off tool. It’s another thing to expect your code to be upstreamed into a large project and let others take on the maintenance burden, not to mention review code you haven’t even reviewed yourself!
Note: I, myself, am guilty of forking projects, adding some simple feature I need with an LLM quickly because I don’t want to take the time to understand the codebase, and using it personally. I don’t attempt to upstream changes like this and waste maintainers’ time until I actually take the time myself to understand the project, the issue, and the solution.
What are you talking about? It was a ridiculously useful debugging feature that nobody in their right mind would block over "added maintenance". The MR was rejected purely for political/social reasons.
My grandfather also worked on it, as a technician in Los Alamos.
He had previously been working for a scientific supplies company in Chicago that was (unbeknownst to him) providing supplies to the Manhattan Project. Apparently his boss was aware of it, and when my grandfather's number came up in the draft, a letter from his boss convinced the draft board to assign him to Los Alamos instead. He was eventually able to get my grandmother, a secretary and typist, a job as a secretary in Los Alamos as well so that she could join him. She teased him the rest of their lives, because, as the secretary to someone more important than a lowly technician, she had technically held a higher security clearance than he ever did!
The Atomic Heritage Foundation collects records about people who were affiliated with the Manhattan Project, as well as oral histories. Perhaps they have more information about your grandfather's work? See here: https://ahf.nuclearmuseum.org/ahf/bios/
Thank you for the link. I tried using their little search table, but nothing came back. One thing that makes matters a bit more difficult is the record keeping of the time: my family has some other documents from his life where he apparently went by a few different permutations of his name. That, or mistakes were made when entering records.
I might try contacting them directly, though. Thanks again!
> I tend to agree but, playing devil's advocate, is this true for other roles? Does a movie director need to know how to build sets? How to sew costumes? How to use Blender/Maya/Houdini?
I don't know that much about movie making, but my understanding is that there would be managers and/or leads within each specialty, who are (among other things) managing the interaction between their specialty and the director / producers.
That seems pretty comparable to what's being discussed here.