>"Single, reusable primitives play a disproportionately large aesthetic and practical role in mathematics, engineering, and even biology. Widely known classical examples include the NAND gate (and its dual, the Peirce arrow, logical NOR) for Boolean 0/1 logic [2, 12], the operational amplifier [13] for positive and negative feedback processes, and, more recently, the rectified linear unit (ReLU) "ramp" activation function [14] in deep learning [15]. We also mention Wolfram's single axiom [16], the K, S combinators from combinatory logic [17, 18],
Interaction Combinators [19], and fuzzy versions of the Sheffer stroke [20]. Other well-known examples are one-instruction set computers (OISC), e.g. SUBLEQ [21], Conway's FRACTRAN [22] and the Rule 110 cellular automaton [16, 23]."
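A quick illustration of the first example in that list: NAND is "universal" in the sense that every other Boolean gate can be built from it alone. A minimal Python sketch:

```python
# Minimal sketch: NAND as a universal Boolean primitive.
# Every other gate below is built from nand() alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    # Classic 4-gate XOR construction from NAND.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustive check against Python's own bitwise operators:
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b)  == (a | b)
        assert xor_(a, b) == (a ^ b)
```

The same trick works with NOR (the Peirce arrow), which is why both appear in the quote as duals.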
>"If AI makes every engineer 50% more productive, you don't get 50% more output. You get 50% more pull requests. 50% more documentation. 50% more design proposals. And someone, somewhere, still has to review all of it.
When two or three early adopters start generating more PRs than before, the team absorbs it. No big deal.
When everyone does it, review becomes the constraint.
The bottleneck doesn't vanish. It moves upstream, to the parts of the job that are irreducibly human: deciding what to build, defining "done," understanding the domain, making judgment calls about risk.
I've written about this pattern before:
the work didn't disappear, it moved.
What's new here is that it moved specifically into verification - and most teams haven't consciously staffed or structured for that yet.
[...]
The question isn't "how do we produce more code?" anymore. The question is "how do we verify more code?" And I don't think most teams have a real answer to that yet."
Excellent article!
It's a great question... how do we verify AI-produced code? We could use AI to do that too, but then:
Who verifies the verifier?
Related:
Quis custodiet ipsos custodes? (Alternatively known as: "Who watches the watchmen?" / "Who oversees the overseers?" / "Who manages the managers?" / "Who guards the guardians?" / "Who reviews the reviewers?", etc., etc.):
SNL's Stefon character: "This one has it all... Waffle House, FEMA, breakfast foods, federal emergencies, waffles, emergency preparedness, eggs, teleportation, bacon, black helicopters, hash browns, angry men in combat fatigues talking to God over 2-way radios, George Carlin, grits, syrup for the grits, toast, military communications, orange juice, armageddon/end-of-the-world apocalypse themes, milk, coffee and other breakfast items... all for a very reasonable price!"
Split an "it needs to run in a datacenter because its hardware requirements are so large" AI/LLM across multiple people who each want shared access to that particular model.
Sort of like the Real Estate equivalent of subletting, or splitting a larger space into smaller spaces and subletting each one...
Or, like the Web Host equivalent of splitting a single server into multiple virtual machines for shared hosting by multiple other parties, or what-have-you...
I could definitely see marketplaces similar to this popping up in the future!
It seems like it should make AI cheaper for everyone... that is, "democratize AI"... in a "more/better/faster/cheaper" way than AI has been democratized to date...
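Back-of-the-envelope, the "subletting" economics look something like this. All the numbers below are made up for illustration; the point is just that a fixed hosting cost divided across N tenants drops the per-seat price roughly linearly:

```python
# Hypothetical sketch of the "subletting" idea: split the fixed cost of
# hosting one large model across N subscribers. Every number here is an
# assumption for illustration, not real pricing.

def per_seat_price(monthly_host_cost: float, n_subscribers: int,
                   margin: float = 0.15) -> float:
    """Fixed cost split evenly across tenants, plus an operator margin."""
    return monthly_host_cost * (1 + margin) / n_subscribers

solo = per_seat_price(20_000, 1)     # one tenant bears the whole bill
shared = per_seat_price(20_000, 40)  # forty tenants share it

assert shared < solo / 10  # sharing is more than 10x cheaper per seat
```

Real multi-tenant serving is messier (contention, batching, peak-hour scheduling), but the basic arithmetic is what would drive the "cheaper for everyone" effect.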
>"[...] because they may contain shared subterms (also known as "common subexpressions").[1]
Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees."
If we look at the history of programming languages, we see the idea of Templating occurring over and over again in different contexts, e.g., C's macros, C++ Templates, embedding PHP code snippets into an otherwise mostly-HTML file, etc., etc.
Templating can involve aspects of meta-code (code about the code), interpretation proxying (which engine/compiler/system/parser/program/subsystem/? is responsible for interpreting a given section of text), etc., etc.
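To make "interpretation proxying" concrete, here's a toy template engine in Python: one file, two interpreters. Text outside the `{{ }}` markers passes through literally (like the HTML around PHP snippets), while text inside is handed off to Python's own evaluator. The `render` function and marker syntax are made up for illustration:

```python
# Toy sketch of "interpretation proxying": one file, two interpreters.
# Literal text is passed through as-is; {{ expr }} sections are proxied
# to Python's evaluator, the way PHP snippets are proxied inside HTML.

import re

def render(template: str, env: dict) -> str:
    # Replace each {{ expr }} with the result of evaluating expr in env.
    return re.sub(r"\{\{\s*(.*?)\s*\}\}",
                  lambda m: str(eval(m.group(1), {}, env)),
                  template)

page = "<p>Hello, {{ user.title() }}! You have {{ n + 1 }} messages.</p>"
print(render(page, {"user": "ada", "n": 2}))
# -> <p>Hello, Ada! You have 3 messages.</p>
```

Every real templating system (C macros, C++ templates, PHP, Jinja) is a more disciplined version of this same split: deciding which interpreter owns which span of text.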
Here we see this idea as another level of proxied/layered abstraction/indirection, in this case between an AI/LLM and the underlying source code...
Is this a good idea?
Will all code be written like this, using this pattern or a similar one, in the future?
I for one don't know (it's too early to tell!), but one thing is for sure: this new "layer" certainly contains an interesting set of ideas!
I will definitely be watching to see more about how this pattern plays out in future software development...
>"The quacking that catches my ear is when something develops a dependency graph: your package depends on a package that depends on a package, and now you need resolution algorithms, lockfiles, integrity verification, and some way to answer “what am I actually running and how did it get here?”
Several tools that started as plugin systems, CI runners, and chart templating tools have quietly grown transitive dependency trees. Now they walk like a package manager, quack like a package manager, and have all the problems that npm and Cargo and Bundler have spent years learning to manage, though most of them haven’t caught up on the solutions."
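The moment a tool crosses that line is easy to pin down in code: once dependencies have their own dependencies, you need transitive resolution and a lockfile-style answer to "what am I actually running?" A minimal sketch, using a made-up in-memory registry in place of a real one:

```python
# Minimal sketch of the moment a tool "becomes" a package manager:
# transitive resolution plus a lockfile-style integrity record.
# REGISTRY is a hypothetical in-memory stand-in for a real registry.

import hashlib
import json

REGISTRY = {                 # name -> (version, direct dependencies)
    "chart-tool": ("1.4.0", ["tmpl-lib", "yaml-lib"]),
    "tmpl-lib":   ("2.0.1", ["str-utils"]),
    "yaml-lib":   ("3.1.2", []),
    "str-utils":  ("0.9.5", []),
}

def resolve(root: str) -> dict:
    """Walk the transitive dependency graph, pinning every version."""
    pinned, stack = {}, [root]
    while stack:
        name = stack.pop()
        if name in pinned:
            continue                      # already resolved
        version, deps = REGISTRY[name]
        pinned[name] = version
        stack.extend(deps)
    return pinned

def lockfile(pinned: dict) -> str:
    """One integrity hash over the pinned set: 'how did it get here?'"""
    blob = json.dumps(pinned, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

lock = resolve("chart-tool")
assert "str-utils" in lock    # a transitive dep of a direct dep
```

Everything npm, Cargo, and Bundler have learned (conflict resolution, yanked versions, supply-chain attestation) is what's missing between this 25-line sketch and a production package manager, which is the gap the quoted article is pointing at.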