LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.
I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.
If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.
But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?
But Starcraft training was not done by mimicking human strategies - it was pure RL with a reward function shaped around winning, which allowed non-human and eventually superhuman strategies to emerge (such as worker oversaturation).
The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if a departure from human coding structure would be surprising, as that would require developing a new language).
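To make the contrast concrete, here is a toy sketch (entirely hypothetical - a one-dimensional "strategy" space and a hand-made reward, not any real training pipeline): imitation regresses toward what the human data does, while a reward-driven update is free to walk away from the human convention.

```python
import random

# Toy "strategy": how many workers to build per base.
# Hypothetical numbers chosen only to illustrate the point.
HUMAN_CONVENTION = 16   # what human games in the dataset do
OPTIMAL = 24            # suppose oversaturation actually wins more

def reward(workers: float) -> float:
    """Win rate peaks at the true optimum, not at the human convention."""
    return -abs(workers - OPTIMAL)

def train_imitation(samples):
    """Imitation: fit the human data; cannot exceed what the data contains."""
    return sum(samples) / len(samples)

def train_rl(steps=2000, seed=0):
    """Hill-climb on reward alone; free to leave human conventions behind."""
    rng = random.Random(seed)
    policy = float(HUMAN_CONVENTION)  # even when initialized from human play...
    for _ in range(steps):
        candidate = policy + rng.choice([-1, 1])
        if reward(candidate) > reward(policy):
            policy = candidate
    return policy

human_games = [HUMAN_CONVENTION] * 100
print(train_imitation(human_games))  # stays at the human convention
print(train_rl())                    # climbs to the reward optimum
```

The imitation learner converges on 16 because that is all the data shows; the reward-driven learner ends at 24 because nothing ties it to the human prior. That is the whole argument in miniature.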
> Can you sit down with an unfamiliar domain and develop enough genuine curiosity to get good at it, without a syllabus or a credential dangling in front of you?
Do I have faith that I'll be compensated according to my developed ability?
Looking broadly at the recent past, the correct answer seems to be "no".
I've known many people who met through games. They offer something similar, in the sense that you can meet new people and learn about them.
The synchronous nature of multiplayer games leaves most of this expression implicit rather than explicit, though, so for some people it doesn't fit the same need. It's a kind of role-play.
I think most people are, for lack of a better metaphor, blood-sucking vampires for honest, explicit, and carefully-crafted communication. People are pleased when I offer it, but they struggle to offer it back, so I learn to not bother. Most relationships degenerate into expressing things better left unsaid, or being entirely superficial.
A case study of myself as an overeager math student:
I used to focus so much on finding "elegant" proofs of things, especially geometric proofs. I'd construct elaborate diagrams to find an intuitive explanation, sometimes disregarding gaps in logic.
Then I gave up, and now I appreciate the brutal pragmatism of using Euler's formula for anything trigonometry-related. It's not a very elegant method, if accounting for the large quantity of rote intermediate work produced, but it's far more effective and straightforward for dealing with messy trig problems.
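As a concrete instance of that brutal pragmatism, here is the standard Euler's-formula route to an angle-addition identity - no diagram, just multiplying out exponentials:

```latex
e^{i(a+b)} = e^{ia} e^{ib}
\implies \cos(a+b) + i\sin(a+b)
  = (\cos a + i\sin a)(\cos b + i\sin b)
  = (\cos a \cos b - \sin a \sin b) + i(\sin a \cos b + \cos a \sin b)
```

Equating real and imaginary parts gives both addition formulas at once. It's rote algebra, but it never leaves a gap in the logic.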
Agreed. I think the divide is between code-as-thinking and code-as-implementation. Trivial assignments and toy projects and geeking out over implementation details are necessary to learn what code is, and what can be done with it. Otherwise your ideas are too vague to guide AI to an implementation.
Without the clarity that comes from thinking with code, a programmer using AI is the blind leading the blind.
The social aspect of a dialogue is relaxing, but very little improvement is happening. It's like a study group where one (relatively) incompetent student tries to advise another, and then test day comes and they're outperformed by the weirdo that worked alone.
Writing may not be produced for the prestige of its result, but written words still serve an essential purpose for communication. I think that, as with any essential art, e.g. cooking, people will experiment with it to fit their needs.
Writing is also peculiar in that it is easily referenceable with a deep history, so it serves as a way to compare one's own ideas to others. Memes are similar in principle, but tend towards esotericism and ephemerality in a balkanized internet.
I prefer a more direct formulation of what mathematics is, rather than what it is about.
In that case, mathematics is a demonstration of what is apparent, up to but not including what is directly observable.
This separates it from historical record, which concerns itself with what apparently must have been observed. It also separates it from literal record, since an image of a bird is a direct reproduction of its colors and form.
This separates it from art, which (over-generalizing here) demonstrates what is not apparent. Mathematics is direct; art is indirect.
While science is direct, it operates by a different method. In science, one proposes a hypothesis, compares against observation, and only then determines its worth. Mathematics, on the contrary, is self-contained. The demonstration is the entire point.
3 + 3 = 6 is nothing more than a symbolic demonstration of an apparent principle. And so is the fundamental theorem of calculus, when taken in its relevant context.
I think that humans can find new frontiers to struggle on and develop mental faculties for, even if the prior frontiers are solved.
"Problem-solving" might be dead, but people today seem more skilled in categorizing and comparing things than those in the past (even if they are not particularly good at it yet). Given the quantity and diversity of information and culture that exists, it's necessary. New developments in AI reinforce this with expert-curated data sets.
I have to agree with you. It seems that most measures to make school harder or more rigorous turn it into an aptitude test or boot camp, because so little development can occur in that environment. It breaks down individuals or, at best, filters them.
If that's what schools are supposed to be, so be it, but I'd like to see that outcome explicitly acknowledged (especially by other posters here) instead of implied.
> If a game is good, it’s going to attract cheaters.
I have started to consider that games should be inherently cheat-resistant, not protected by anti-cheats.
Chess and Go are, by design, less affected by cheating. It's not nearly as frustrating to lose to a cheater when they're working with the same information you are, and when they perform actions that a human could reasonably perform.
I find that rulesets enforced by nature or by the design of the system are, to me, more interesting than rulesets enforced by agreement and punishment, even if the "agreement" is not to hack the game. It forces more creativity and makes games offer more relevant experiences instead of copying the same formula.
As for identity systems etc. to permaban cheaters, I think that if it takes increasingly strict levels of monitoring and crackdown and reliance on "trusted authorities" to keep these beloved games playable, it might be better to move on and find new games. Few (if any) individual games or genres of games matter enough to warrant this attention.
Cheating might break tournament or social rules, but it doesn't break the game. So yes.
And any online game can be "cheated" by having someone better play in your place, or abusing the ranking system, but again that is breaking a social/meta-game rule, not a game rule.
Cheats in FPS games effectively break the rules of the game (wallhacks), or do things that are entirely impossible for a human (instant-lock aimbot). Chess and Go don't have that problem.
I suppose the difference is moot when everyone imagines that they're in a tournament playing for clout, and not playing to learn strategy.