To play the devil's advocate: AI allows non-technical people to bootstrap a functional application. A good global design may be missing when the code is generated in small pieces and stitched together. It's sloppy in the sense that it cannot be maintained or extended easily, but it works™
The ideas underlying the implementation are not necessarily tied to the implementation given the skill gap.
Ok, but how long does it work™ for? I don't know of any "immutable businesses" that are actually successful. Every single business out there, from the corner store down the street to the largest megacorps, is always changing things, experimenting, and trying to find a competitive edge. So it seems all the AI really did was allow this non-technical founder to dig themselves, and any customers unfortunate enough to trust them, very deeply into a hole.
For long-running containerised simulations, this saves a lot of time on failures (as long as you have a safe place to write the snapshots to) by not restarting from 0 every time.
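A minimal sketch of that pattern, assuming a snapshot directory on a volume that survives container restarts (the directory name, state dict, and step counts here are illustrative, not from any particular simulator):

```python
import os
import pickle

SNAPSHOT_DIR = "snapshots"   # assumed to be a mounted, persistent volume
SNAPSHOT_EVERY = 100         # steps between snapshots
TOTAL_STEPS = 1000

def latest_snapshot():
    """Return the path of the newest snapshot, or None if none exist."""
    os.makedirs(SNAPSHOT_DIR, exist_ok=True)
    files = sorted(f for f in os.listdir(SNAPSHOT_DIR) if f.endswith(".pkl"))
    return os.path.join(SNAPSHOT_DIR, files[-1]) if files else None

def run():
    # Resume from the latest snapshot instead of restarting from step 0.
    path = latest_snapshot()
    if path:
        with open(path, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "value": 0.0}

    while state["step"] < TOTAL_STEPS:
        state["value"] += 1.0  # stand-in for one simulation step
        state["step"] += 1
        if state["step"] % SNAPSHOT_EVERY == 0:
            # Write atomically (temp file + rename) so a crash mid-write
            # never leaves a corrupt snapshot as the only copy.
            tmp = os.path.join(SNAPSHOT_DIR, "tmp.pkl")
            with open(tmp, "wb") as f:
                pickle.dump(state, f)
            os.replace(tmp, os.path.join(SNAPSHOT_DIR, f"{state['step']:08d}.pkl"))
    return state
```

If the container dies at step 950, the next run picks up from the step-900 snapshot rather than step 0.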
> His opinion was that the Cessna 185 simply didn’t stall.
There's your problem. Don't opine on the operating characteristics of a production aircraft. Read the handbook. This incident was caused by poor airmanship.
It seems like he was asked a question which compelled him to opine. You seem to be assuming that he went ‘out on a limb’ of his own accord, without any basis for that assumption.
No, I did not make that assumption.
His answer to the investigator's question should have been what the operating handbook says and not an opinion he held.
Even if he was offering an opinion as an addendum (one that may have been edited out of context in reporting), it shouldn't be this one.
People draw an arbitrary line between the way we treat AI outputs and human outputs.
A human imagining orcs and one-horned horses has a 'fantastical, larger than life' imagination, but an AI drawing people with strange hands is 'incorrect'.
These are not one-to-one examples, but the point stands: with enough suspension of disbelief, people are more likely to take human creations at face value than AI ones when they know the source.
Sure, I think I agree. My point is just that you can imagine the post-apocalyptic artist with a human brain painting the beach or just painting the dark landscapes; but we can only imagine the GPT brain painting the dark landscapes, insofar as that is the majority of its day-to-day dataset, and thus the statistically likely output.
This suggests a qualitative difference between the two when it comes to "creating" or "generating" that feels far from trivial, even if you want to say the AI can make "good" things, whatever that means to you.
The prompt only asked to "warm up my lunch" without specifying how.
SayCan[1] generated step-wise high level instructions using LLMs for robotic tasks. This takes it a step further by converting high level instructions to low level actions almost entirely autonomously.
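The core SayCan-style idea can be shown with a toy sketch (not the paper's actual code): a language model scores how useful each candidate skill is for the instruction, an affordance model scores how feasible it is in the current state, and the product of the two picks the next low-level skill. All scores below are hard-coded stand-ins for illustration.

```python
instruction = "warm up my lunch"

skills = ["pick up lunch box", "open microwave", "place box in microwave",
          "start microwave", "water the plants"]

# Stand-in for LLM likelihood that each skill helps with the instruction.
llm_usefulness = {
    "pick up lunch box": 0.9, "open microwave": 0.7,
    "place box in microwave": 0.6, "start microwave": 0.5,
    "water the plants": 0.01,
}

# Stand-in for a value function: how feasible each skill is right now.
affordance = {
    "pick up lunch box": 0.95, "open microwave": 0.9,
    "place box in microwave": 0.1,  # infeasible: nothing is held yet
    "start microwave": 0.2, "water the plants": 0.8,
}

def next_skill():
    # Combined score = usefulness * feasibility; pick the best skill.
    return max(skills, key=lambda s: llm_usefulness[s] * affordance[s])
```

Here the combined score correctly prefers "pick up lunch box" even though "water the plants" is perfectly feasible, because the LLM rates it useless for warming up lunch.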
Pasting all the plain text of the discussion page here at https://news.ycombinator.com/item?id=34470287 as of 22-01-2023 16:00Z renders the Hilbert curve prominently, while the rest is a hodgepodge of blocks and lines.
Waymo has demonstrated capability (in a limited sense) with full self-driving so I'm excited for this to take off.
What concerns me are the screens. They are going to play ads on a loop. Phone ads to car ads to house ads. We'll get the "no billboards" demands all over again, because the screens are the billboards!