It's clear that the article is mostly talking about the reader's ability to interpret figuratively, regardless of the specific reference. But I'm not even sure it's a biblical reference, since dinosaurs are generally incompatible with the story of Noah's Ark. I'd guess it's more along the lines of some theory of continental movement that was prevalent at the time. Or maybe it's just a weird mishmash of dinosaurs and Noah's Ark?
"If you travelled back in time, the coastline itself would be unrecognisable to modern eyes. In the Jurassic Period, most of what later became Britain was under the sea, apart from Scotland, East Anglia and a series of small islands in the southwest."
> if all the water left, it could be dry like a desert too
This is just a contextual interpretation thing. It's clear that's not what he means, because he says it's muddy, so it must be the other thing. Also, becoming a desert is the more extreme reading, and in that case the writer would probably have offered a more detailed explanation.
Someone saying they vibe-coded a thing is like them saying they were hammered when they wrote it. Maybe they did a great job, but probably not; it's definitely cause for concern.
Unnecessary access isn't a solvable problem. In order to restrict permissions to exactly what a program needs, in general, you'd have to define exactly what a program does. In other words, you'd need to rewrite the program with self-enforcing access restrictions.
So, permissions are always going to be more general than what a program actually needs and, therefore, exploitable.
Producing incorrect information is an insidious example of this. We can't simply restrict the program's permissions so that it only yields correct outputs -- we'd need to understand the outputs themselves to make that work. But, then, we're in a situation where we're basing our choices on potentially incorrect and unverified outputs from the program.
I think that's kind of the point though: AI is the sand, but it's the rocks that hold all of the value; the further you get away from using AI the more real value you obtain. Like, a few of the rocks have gold deposits in them, and the sand is just infinitely copious but never holds anything valuable. And you've got a bunch of people running around saying, "Behold my mountains of sand!"
IIRC, you can do `git branch -D $(git branch)` and git will refuse to delete your current branch. Kind of the lazy way. I never work off of master/main, and usually when I need to look at them I check out the remote branches instead.
I think, more generally, "push effects to the edges" which includes validation effects like reporting errors or crashing the program. If you, hypothetically, kept all of your runtime data in a big blob, but validated its structure right when you created it, then you could pass around that blob as an opaque representation. You could then later deserialize that blob and use it and everything would still be fine -- you'd just be carrying around the validation as a precondition rather than explicitly creating another representation for it. You could even use phantom types to carry around some of the semantics of your preconditions.
Point being: I think the rule is slightly more general, although this explanation is probably more intuitive.
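A minimal sketch of the "carry the validation as a precondition" idea, in TypeScript. It uses a branded intersection type, which is the usual TypeScript stand-in for a phantom type; the names `ValidatedBlob`, `validate`, and the `version` check are all hypothetical, just to make the pattern concrete:

```typescript
// Structurally still a string, but the brand records that validation
// already happened. The brand exists only at the type level.
type ValidatedBlob = string & { readonly __brand: "validated" };

// The single place where the blob's structure is checked.
// (Hypothetical rule: the blob must be JSON with a numeric "version".)
function validate(raw: string): ValidatedBlob {
  const parsed = JSON.parse(raw);
  if (typeof parsed.version !== "number") {
    throw new Error("invalid blob");
  }
  return raw as ValidatedBlob;
}

// Downstream code accepts only ValidatedBlob, so the precondition
// travels with the value instead of requiring a second, parsed
// representation to be threaded through the program.
function use(blob: ValidatedBlob): number {
  return JSON.parse(blob).version;
}

const blob = validate('{"version": 3}');
console.log(use(blob)); // 3
```

Passing a raw, unvalidated `string` to `use` is a compile-time error, which is exactly the "proof carried in the type" effect, without ever leaving the opaque-blob representation at runtime.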
Systems tend to change over time (and distributed nodes of a system don’t cut over all at once). So what was valid when you serialized it may not be valid when you deserialize it later.
This issue exists with the parsed case, too. If you're using a database to store data, then the lifecycle of that data is in question as soon as it's used outside of a transaction.
We know that external systems provide certain guarantees, and we rely on them and reason about them, but we unfortunately cannot shove all of our reasoning into the type system.
Indeed, under the hood, everything _is_ just a big blob that gets passed around and referenced, and the compiler is also just a system that enforces preconditions about that data.
> As I understood it the trick was effectively to dump the full public API documentation of one of those services into their agent harness and have it build an imitation of that API, as a self-contained Go binary. They could then have it build a simplified UI over the top to help complete the simulation.
This is still the same problem -- just pushed back a layer. Since the generated API is wrong, the QA outcomes will be wrong, too. Also, QAing things is an effective way to ensure that they work _after_ they've been reviewed by an engineer. A QA tester is not going to test for a vulnerability like a SQL injection unless they're guided by engineering judgement, which comes from an understanding of the properties of the code under test.
The output is also essentially the definition of a derivative work, so it's probably not legally defensible (not that that's ever been a concern with LLMs).