No one accused them of being competent negotiators. Remember, the secret behind the "Art of the Deal" is to be obstinate and abusive until everyone settles just to stop dealing with you.
> New languages and technology will be derivatives of existing tech.
This has always been true.
> There will be no React successor.
No one needs one, but you can have one just by asking the AI to write it, if that's what you need.
> There will never be a browser that can run something other than JS.
Why not? Just tell the AI to make it.
> And the reason for that is because in 20 years the new engineers will not know how to code anymore.
They may not need to know how to code but they should still be taught how to read and write in constructed languages like programming languages. Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
> Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
> The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
You don't exactly need to use PLs to clarify an ambiguous requirement; you can just use a restricted, unambiguous subset of natural language, as you should when discussing or elaborating something with a coworker.
Indeed, like terms & conditions pages, which people always skip because they're written in a "legal language", using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation & PLs, but it's not impossible to do so.
With that said, the previous paragraph works if you're delegating to a competent coworker. It should work on "AGI" too, if that ever exists. However, I don't think it will work reliably with present-day LLMs.
> You don't exactly need to use PLs to clarify an ambiguous requirement
I agree. I guess what I'm trying to say is that the only reason we've called constructed languages "programming languages" for so long is that they've primarily been used to write programs. But I don't think that means we'll be turning to unambiguous natural languages, because what we've found from a UX standpoint is that it's actually better for constructed languages to be overtly unlike natural language than to be covert natural languages: being visibly different sets expectations appropriately.
> you can just use a restricted, unambiguous subset of natural language, as you should when discussing or elaborating something with a coworker.
We’ve tried that and it sucks; COBOL and its descendants never gained lasting traction for the same reasons. In fact, proximity to natural language is not important to making a constructed language good at what it's for. As you note, the things you want to say in a constructed language are often too awkward or verbose to say in natural-language-ish languages.
> terms & conditions pages, which people always skip because they're written in a "legal language"
Legalese is not unambiguous though, otherwise we wouldn’t need courts -- cases could be decided with compilers.
> using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation & PLs, but it's not impossible to do so.
When there is a cost per token, it becomes very important to say everything you need in as few tokens as possible; just because something is possible doesn't mean it's economical. This points to a mixture of natural language interspersed with code, math, and diagrams, so people will still need to read and write these things.
Moreover, we know there's little you can do to prevent writing bugs entirely, so the more you have to say, the more chances you have to say wrong things (i.e., all else equal, higher LOC means more bugs).
Maybe the LLM writes bugs at a lower rate than a human, but it's not writing bug-free code, and the volume of code it writes is astronomical, so the absolute number of bugs is probably enormous as well. Natural language has very low information density: more tokens to say the same thing, more cost to store and transmit, more surface area to check for bugs and rot. We should prefer to write denser code in the future for these reasons. I don't think that means we'll be reading and writing zero code.
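The density point can be made concrete with a toy sketch. Whitespace splitting is only a crude stand-in for a real LLM tokenizer, and the predicate here is made up purely for illustration:

```python
# Toy comparison of information density: the same predicate stated
# in prose vs. in code. Whitespace splitting is a rough proxy for
# tokenization, but the ratio is what matters.

prose = (
    "Return true when the input number is greater than or equal to zero "
    "and strictly less than one hundred and is divisible by three with "
    "no remainder, otherwise return false."
)

code = "lambda n: 0 <= n < 100 and n % 3 == 0"

prose_tokens = len(prose.split())  # 30 whitespace "tokens"
code_tokens = len(code.split())    # 13 whitespace "tokens"

print(prose_tokens, code_tokens)
```

With per-token pricing, that ratio (more than 2x here) is a direct cost multiplier, before even counting the extra surface area you have to review.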
I mean... I dunno I wish the AI could write my papers. I ask it to and it's just bad. The research models return research that doesn't look anything like the research I do on my own -- half of it is wrong, the rest is shallow, and it's hardly comprehensive despite having access to everything (it will fail to find things unless you specifically prompt for them, and even then if the signal is too low it'll be wrong about it). So I can't even trust it to do something as simple as a literature review.
Insofar as most research is awful, it's true that the AI is producing research that looks and sounds like most of it out there today. But common-case research is not what propels society forward. If we try to automate research with the mediocrity machine, we'll just get mediocre research.
Computer science and coding are as related as physics and writing. If your thesis is the LLM can replace all of science then you have more faith in them than I do. If anything the LLM accelerates computer science and frees it from the perception that it is coding.
Yup, I have always viewed corporations as a kind of artificial intelligence -- they certainly don't think and behave like human intelligences, at least not healthy, well-adjusted ones. If corporations were humans, I feel they would have a personality disorder like psychopathy, and I'm starting to feel the same way about AI.
> Anyway, I guess there is a growing collective admission that the climate is changing even if the crotchety ones will still quip about it not being warmer at some given time and place. It's unfortunate that "global warming" caught on instead of "climate change."
Rather than a collective admission, I feel what's happening is that the crotchety people are dying off, leaving us millennials and Gen Z to clean up the mess. Things won't change until a critical mass of them are dead and gone.
You’re downvoted, but the demographics on this are clear. The younger you are, the more likely you are to take seriously the dire message that the scientific method has trivially produced.
> I think the vibe coded show HN projects are overall pretty boring.
Agreed. r/ProgrammingLanguages had to deal with this recently in the same way HN has; people were submitting obviously vibecoded languages that barely did anything, just a deluge of "make me a language that does X" where the result doesn't actually do X or embody any of the properties that were prompted.
One thing that was pointed out: "More often than not the author also doesn't engage with the community at all, instead they just share their project across a wide range of subreddits." I think HN is another destination for those kinds of AI slop projects -- I'm sure you could find every language banned from that forum posted here as well.
Their solution was to write a new rule and just ban them outright. Things have been going much better since.
r/macapps is also putting new requirements in place [1], requiring among other things a statement of what problem your app solves, how it's better than existing solutions, and even a changelog/roadmap.
I would bet just the first two text fields would be enough to catch out vibecoders.
Honestly I thought everyone was clear how this was going to go after the initial decapitation from 2016, but it seems like everyone's gonna allow these science experiments to keep causing damage until someone actually regulates them with teeth.