The framing of tests-as-source-code resonates, but I think it extends further than testing specifically. From my experience building with AI coding tools, I spend far more time reviewing and validating code than writing it. You end up acting like an engineering manager running a team of junior devs: scoping tasks tightly, reviewing output critically, deciding whether what came back meets the requirement. Tests are one expression of that, but so is code review - they're both forms of validation. The broader shift is that the developer's primary output is becoming judgment about correctness rather than the code itself.
Essentially the same thing Elon has been saying for years. Physical AI plays to a real Chinese advantage: manufacturing density. China doesn't just have cheap labor, it has the iteration speed that comes from having chip fabs, robotics assemblers, and end-user factories within the same industrial corridor. Compared to foundation models, the gap in embodied AI narrows fast when the bottleneck shifts from compute to real-world manufacturing.
This is explicitly framing hand-written code as the wrong workflow. That's a significant shift from even six months ago. My sense is this will become more common at companies building on top of APIs and integrations (Zapier's core domain), where the code is more glue than architecture. Whether it scales to systems-level work is a different question. The failure modes of agent-written code are still poorly understood, and "built mitigations" is doing a lot of heavy lifting in that job listing.
I am just linking to the Python version because it’s all on one page. All of the other supported languages are the same - they are all autogenerated from the same definition file by AWS.
Also consider these same APIs are surfaced by the CLI, Terraform, CloudFormation and the AWS CDK.
I’ve been testing writing code and shell scripts against the AWS SDK since 3.5. It helped then; I can mostly one-shot it now, as long as the APIs were available when the model was trained. For newer APIs I just have to tell it to “search for the latest documentation”.
There are tons of AWS examples on GitHub, aren't there? You couldn't have picked a better API for an AI to one-shot, given the literally millions of examples it has seen.
I mean mapping one crazy, quirk-ridden API from one non-software company onto another non-software company's API, which is often behind a username/password or some other barrier.
It's impossible for private companies to control what state actors (especially the US military) do with AI.
OAI made a business decision to cooperate with the DoW. And they had to make the "we can't control how customers use it" excuse because of pressure from their employees, peer competitors, and the general public.
The "optimization" framing is where self-help tends to go wrong. Tyler Cowen has made a similar point that reading self-help books is often a form of procrastination disguised as productivity, because you're consuming meta-strategies rather than doing the actual work in whatever domain you care about.
The PM role at Meta-scale companies vs. startups has always been different, and the two are diverging even more as AI gets better.
In startups anything goes. PMs and engs do whatever it takes to ship and scale the business. No one cares who's using AI in what way, as long as they're getting shit done.
In a place like Meta or Amazon, people also get more shit done with AI, but because those teams are huge, well-oiled machines, sudden productivity bumps or norm changes can drop overall productivity.
Totally agree with this post as long as it's limited to large, mature teams.
100%. PMs at startups already wear many hats and AI helps them do that even better.
But to this sister comment's point, I do think that the dedicated PM role will vanish and the classic BigCo PM will need to look a lot more like the startup one.
Agreed. One thing that resonated for me in this article was that code reading will be a great skill to have. Reading a lot and thinking a lot will help you drive AI to the destination effectively.
To avoid "AI barging into human conversations unsolicited", you can either stop the AI from barging in, or remove the premise that this is a "human conversation". The latter might be easier.