If you cannot write "basic syntax" for any language then you are not a programmer, and certainly not a software engineer? This is not a value judgement, it's ok (probably good tbh) to not be a programmer. But you are wasting everyone's time by interviewing for a programming position in this case.
Personally, I forget syntax all the time. There's always a warm-up period after I switch languages, and it takes me longer to start writing good, idiomatic code.
Like sure, I can probably write some python, but will it be pythonic? I might still be Java-minded for a while, trying to OOP my way into solutions.
Earlier today I needed to write some PHP and couldn't remember if it used length, count, or size. I had to look it up. I've been doing this for 20 years.
Same, I can't pass any test that relies on getting syntax correct. If you want me to fizzbuzz on a whiteboard in a language I've been writing dozens or more lines of per day for a year, up to and including the day before, and require that I don't mess up the syntax, I reckon I've got a coin-flip chance of passing at best (meanwhile, sure, of course the actual logic of fizzbuzz isn't tricky for me)
I once got the method invocation syntax wrong for PHP in an interview. I'd written thousands of lines of PHP and had most recently written some the week before.
This, despite starting off my programming journey in editors with no hinting or automatic correction. If anything, I've gotten even worse about remembering syntax as I've gotten better at the rest of the job, but I was never great at it.
I rely on surrounding code to remind me of syntax and the exact names of basic things constantly. On a blank screen without syntax hints and autocompletion, or a blank whiteboard, I'm guaranteed to look like a moron if you don't let me just write pseudocode.
Been paid to write code for about 25 years. This has never been any amount of a problem on the job but is sometimes a source of stress in interviews and has likely lost me an offer or two (most of the sources of stress in an interview have little to do with the job, really)
> dashboards / metrics to roll up / indicate how well teams and individuals have been doing for a long time
I'm actually a little curious about how long it has been. Bad managers have always prioritized irrelevant metrics, of course, but I have a feeling (backed by no data, just vibes) that management in general crossed a point of no return as soon as "data-driven" became a cross-industry buzzword.
Like, I vaguely remember a time when consumer interactions didn't always come with a request to fill out a survey (with the results getting turned into a number and fed into a dashboard somewhere). And then that changed, and now everything must be turned into a number and that number must go up.
"Data driven" essentially means "scalar driven". There is nothing wrong with it if your chosen scalar is a proxy for anything that matters. Of course, usually no one can explain this mapping.
There's actually a good example of this in the rewrite [1], in `PathString::slice`. It performs an unsafe operation to return a slice that would be a use-after-free if the caller had not already guaranteed that an invariant holds. Following idiomatic Rust practice, Claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".
Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe" and including the contract in its documentation. Unfortunately, in this case the invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So not only does this particular codebase have UB problems caused by unsafe code, the SAFETY comments on that unsafe code are also, well, lies.
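To make that concrete, here's a minimal sketch of how that contract would normally be expressed (hypothetical code, not the actual types from [1]):

    pub struct PathString {
        ptr: *const u8,
        len: usize,
    }

    impl PathString {
        /// # Safety
        /// The caller must guarantee that the memory behind `bytes` outlives
        /// this `PathString` and is not freed or mutated while it exists.
        pub unsafe fn init(bytes: &[u8]) -> Self {
            PathString { ptr: bytes.as_ptr(), len: bytes.len() }
        }

        pub fn slice(&self) -> &[u8] {
            // SAFETY: upheld by the `init` contract above - the caller
            // guaranteed the borrowed memory outlives this PathString.
            unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
        }
    }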
`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.
One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.
You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
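For instance (plain standard-library Rust, nothing assumed beyond that):

    use std::num::NonZeroU32;

    fn main() {
        // The only safe constructor checks the invariant, so merely holding
        // a NonZeroU32 is proof the value is nonzero.
        // (NonZeroU32::new_unchecked is the unsafe way to manufacture the token.)
        let n = NonZeroU32::new(42).expect("zero is rejected at construction");
        assert_eq!(n.get(), 42);

        // The freed-up zero value becomes the niche for Option, so the
        // "token" costs nothing over a plain u32.
        assert_eq!(
            std::mem::size_of::<Option<NonZeroU32>>(),
            std::mem::size_of::<u32>(),
        );
    }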
This already happens all the time in Rust, including in the standard library. The typical pattern is to define your CheckedType as

    pub struct CheckedType(UncheckedType);

i.e. with its inner field private. Then you only expose safe constructors that check your invariant, and only provide methods that maintain the invariant.
For a concrete example, String in Rust is a Vec<u8> with the guarantee that the underlying bytes are valid UTF-8. Concretely, it is defined (essentially) as
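    pub struct String {
        vec: Vec<u8>,  // private field: outside std, you can only go through its constructors
    }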
You can construct a string from a vec of bytes via
    pub fn from_utf8(vec: Vec<u8>) -> Result<String, FromUtf8Error>;
as well as the unsafe method
    pub unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;
Note here that there isn't a separate capability/token, though. That is typically viewed as bad practice in Rust, as you can always ignore checking a capability/token. See for example Rust's mutexes, Mutex<T>, which themselves carry the data (T) that you want access to. So, to get access to the data, you must call .lock(). There is a similar philosophy behind Rust's `Result` type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
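A tiny illustration of that "data lives behind the check" philosophy:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32);

        // The data is inside the Mutex, so the only way to reach it is via
        // the guard that lock() hands back - there is no separate token
        // you could forget to check.
        {
            let mut guard = counter.lock().expect("lock poisoned");
            *guard += 1;
        } // guard dropped here, lock released

        // Same idea with Result: the value is only reachable by handling
        // (or explicitly panicking on) the error case.
        match "42".parse::<u32>() {
            Ok(n) => println!("parsed {n}"),
            Err(e) => println!("parse failed: {e}"),
        }
    }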
The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place
Even when you can review the code, it's good to have the compiler check it for you - for similar reasons that it's better to have CI check correctness on each code change than to test the code thoroughly one time and then just be careful going forward.
The maddening thing is that there's a right way to do this if you have the patience and professionalism to do so. It requires building a bit of scaffolding (feature flags, cross-language calling support, harnesses for shadow testing, etc.), then you ship-of-theseus the codebase incrementally. This is not even incompatible with LLM-assistance, plus it breaks the thing up into smaller, reviewable changes that don't break your diff tool!
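To sketch what I mean by a shadow-testing harness (all names hypothetical; the real thing would call the legacy implementation across an FFI boundary, behind a feature flag):

    // Hypothetical shadow-testing harness: the old implementation keeps
    // serving callers while the new one runs alongside it for comparison.

    fn legacy_tokenize(input: &str) -> Vec<String> {
        // stand-in for the existing, battle-tested implementation
        input.split_whitespace().map(str::to_owned).collect()
    }

    fn new_tokenize(input: &str) -> Vec<String> {
        // stand-in for the rewritten implementation under test
        input.split(' ').filter(|s| !s.is_empty()).map(str::to_owned).collect()
    }

    fn shadow_tokenize(input: &str) -> Vec<String> {
        let old = legacy_tokenize(input);
        let new = new_tokenize(input);
        if old != new {
            // Log (or count) divergences instead of failing the caller, so the
            // rewrite is validated on real inputs before the flag is flipped.
            eprintln!("shadow divergence on {input:?}: {old:?} vs {new:?}");
        }
        old // keep serving the proven path until the new one earns trust
    }

    fn main() {
        println!("{:?}", shadow_tokenize("ls -la /tmp"));
    }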
However, doing it the right way takes a bit more time, involves community feedback, and doesn't produce headlines about huge codebases being rewritten by LLMs in just a few days, so ...
> you can always claim you would have used even more caution and process.
Well, specifically, my claim is that any serious professional in this industry would have done so. But we're essentially in agreement, in the sense that yes, I am allowed to make this claim, and in fact already did, in the comment you are replying to.
EDIT: Actually I've been thinking about this a bit more. The thing about commenting on something that someone did is that you must always comment on it after they did it, otherwise it wasn't "something they did." However, being a "Monday morning quarterback", as I understand it in this context, means "criticism of someone's actions afterwards", so it would appear that I am doing that. I also understand this phrase to have a negative connotation, and I would hate to connote negatively in this otherwise very positive community. Quite a dilemma! Glad I have my life coach LLM to help me sort all this out.
It's naive to think that only one set of trade-offs is the best one, because you can always argue for infinite process and caution.
If this Rust rewrite goes relatively smoothly, you are completely wrong about the balance of trade-offs, but you probably won't admit that because the person advocating for more process sees themself in a zero-risk win-win position:
A. The subject fails, thus you win because they should have used more process and caution.
B. The subject succeeds without more process and caution, but they should have were they a professional like you.
I see this kind of thing in the comments on social media if, idk, someone died on a hike. Psh, that's why I never walk anywhere without a week's worth of water, not even to TJ Maxx. Psh, they should have had a satellite phone; I always have one on me just in case. Psh, their satellite phone broke and they didn't have a backup one? Always carry two.
Funnily enough, your claim is worse than those examples because, unlike them, you don't even know if the rewrite failed yet. The Redditors at least waited for the person to die on the hike before they chimed in with riskless feedback from afar.
Yes, dear god, I get it, please stop repeating yourself. Besides starting right out of the gate with personal attacks, you've found 20 different ways to say "it's not fair to criticize something another person did because you're not taking the same risks as them".
This is a poor syllogism - the second clause does not follow from the first - and worse, it's extremely uninteresting. If you had a good argument to make about the actual topic at hand, you would have made it, but I guess you don't, since you've resorted to criticizing the concept of criticism itself. I will admit that it was dumb of me to engage with this in the first place (although I guess you didn't clock that I was making fun of you? which, frankly, tracks).
As a serious professional in the industry - we're dinosaurs. Nobody cares anymore.
The kids are running the show and are making billions with stuff that doesn't work. But it makes money so nobody cares.
This is not a new phenomenon, it started years ago and really took off when JS became the new hotness. You could see it happening live, right here on HN. But the blast radius is massively increased now with AI and people are getting hurt. It's not funny.
The ship has sailed on rigor.
The sad thing is that this is not going to get better. The best we can hope for is slight improvements to agentic "engineering" practice with lots and lots of blog posts on HN written about how they are rediscovering basic engineering practices.
We (the dinosaurs) will roll our eyes while making a fraction of the money the kids are making.
And even if the whole AI ecosystem implodes (it won't) that would be a massive recession and certainly wouldn't make the remaining software engineering work more rigorous either.
As the Simpsons put it: "Am I out of touch? No, it's the children who are wrong."
I'm not even necessarily describing myself as a serious professional - in many ways I'm adjacent to all this! But there's a contingent of people (and in real life, too, not just on HN) who get very angry at the very concept of professionalism itself.
Ah yes, you are actually describing fish shell's Rust rewrite. They specifically called it The Fish Of Theseus which is of course a reference to the ship of Theseus.