"Lua (or any other JIT-compiled scripting language for that matter). That's a standard choice, but it turns out that it's really hard to sandbox it."
This is a sign the author didn't even try it properly. Lua is one of the easiest languages to sandbox. You can choose not to load the dangerous libraries into the environment in the first place, or you can load them and give untrusted code a separate global environment table that you control (not by prepending text to the untrusted Lua source). The only thing you really need to do is never accept untrusted bytecode: always load untrusted scripts by compiling Lua source code, so you can be sure the bytecode is valid. Or you could even, very easily, spawn a separate Lua state for each piece of untrusted code.
When I want a scripting language that's easily sandboxable, Lua is the first one I reach for.
"Lua is a highly dynamic language that knows nothing about c pointers"
That's why you have both lightuserdata and full userdata, where you can set up a metatable from the C side that can't be overridden from the Lua side. It was honestly one of the easiest languages to integrate (try embedding Python, then tell me how much hair you have left).
And if you use LuaJIT, you also get niceties like native integer support (before Lua 5.3 had it, and with sane bit-shifting) and a borderline frictionless FFI (in most cases, just providing the header file is enough). And by frictionless I mean you can literally manipulate C structs directly from LuaJIT, without having to write any translation code (again, LuaJIT parses the header itself). LuaJIT satisfies all of the stated design goals.
You're confusing those cartridges when fired from a rifle-length barrel vs. a handgun-length barrel. A 1.5-inch barrel on some pocket-carried revolver is not going to send .22 LR anywhere near the speed of a 16-inch barrel.
"Whoops, we accidentally made our AI super obsessed with goblins" doesn't really sound like they (or anyone else, for that matter) are in any form of actual control of it. Their fix seems to be to ask it not to in the prompt.
We also don't know how your (or anyone's) brain really works. But we know it does, otherwise you wouldn't have been able to write this comment. So should we just shut your brain down?
We should shut yours down since your comment reads like nothing more than a veiled ad hominem attack. If you disagree with this person, fine, but at the very least try to have a constructive back and forth without resorting to name calling.
I didn't call anyone anything. I was just pointing out that we accept lots of things working without understanding why. Biggest example being our very selves.
How you interpreted it as an ad hominem attack, I have no idea.
All abstractions are leaky, but you can always rewrite them in terms of a lower level and preserve the semantics 100%. It'll be more verbose, it'll look ugly, it won't be convenient to work with, but that's what compilers do: they lower the abstraction level and emit assembly or machine code. That could also theoretically be done by a human with the same level of reliability, given infinite time. The abstraction is deterministic; you can know exactly what it is abstracting.
Not so with LLMs. I can give you an English prompt, and give several different LLMs the same prompt, and you would all understand it differently. There isn't a reliable way to move between abstraction levels.
LLMs can be deterministic, only the current implementations aren't.
I agree that current LLMs are a bad abstraction, and any non-superintelligent AI is worse than a mechanical abstraction in important ways (it's much more complicated, which makes it harder to debug, impossible to prove correct, creates unnecessary coupling, etc.)
But it's still technically an abstraction. A project's source (or part of it, e.g. a single function) could be a sequence of prompts, and even a non-deterministic LLM, if it's good enough, will always output correct code.
If I give someone a piece of code, they can prove things about the code, even if it is at a higher level of abstraction.
A prompt isn't an abstraction. Say you evaluate an LLM completely deterministically: a fixed seed for the sampling, or maybe zero temperature. You give the prompt to an engineer and ask them what it does. They can't really say for sure. 'Try "compiling" it. It doesn't work? Well, try adjusting your prompt a bit. How? I have no idea; try renaming some variable to shirley or something and see what pops out.'
Because it is not actually trying to prove anything about its outputs, setting the temperature to zero will just ensure it always makes the same mistake when "compiling" English into code.
A compiler, by contrast, always preserves the semantics of its input, even when randomness is used (e.g. during register allocation).
No, “it’s” adaptive and if you’re not adaptive then you’re quite literally not doing “it”.
Adaptive methods aren’t something unique to Agile, it’s an aspect found in basic business methodologies and processes. Very basic, textbook stuff. So when software types start grumping about their dysfunctional organizations and blaming methods they aren’t actually applying, it isn’t an indictment of the method and never can be.
If “Adaptive Heat Cycle 3.5” is a process where we turn down the thermostat when we’re too hot, and up when we’re too cold, based on a vote every 20 min: a bunch of sweaty people who are not voting and not changing the temp and lying about their needs because their boss sucks are not using the process. The fact they claim they are is only further proof of dysfunction and incompetence.
Agile has a built-in solution to all Agile complaints: the Agile process itself, where you fix the problem. No fix? Not Agile. Blame the cargo-cult players, not the rules.
And if a tool is that difficult to use, how can you tell if the problem is in the tool or the user? There's a large industry built around doing training and certifications in agile methodologies now. If a tool is that difficult to get right, maybe it's just not a good tool to begin with.
To be fair, the manifesto and methodology are quite good in theory. But I have just never heard of (or experienced) it working properly, and the response is always that it wasn't implemented correctly.
So the widespread existence of certification- and training-heavy business programs obviously proves that every project and business methodology is "bad", and that the problem is the tool of "business methodologies"?
PRINCE2, for example, is constantly fumbled and misunderstood by immature juniors. They don’t get it, and screw it up. So… what? Haphazard planning and last minute project detonations must replace any effort to avoid such outcomes?
It’s chicken and egg. You have screwups who can’t manage and think wrong, so you formalize rules so dummies can’t hurt leadership, and then you have to train people. A stunning number fail to ‘get it’, suck at management, and do what they feel with justifications instead of following the book. That’s standard distribution at play.
Blaming methods for basic management failures is a management and culture failure. “I’ve never seen [agile] implemented correctly” is saying you didn’t fix communication issues. That’s fine, that’s hard. But that’s a meatspace issue, not process.
I am not sure how you jumped from what I said to this. I don't believe I claimed that every project and business methodology is bad. I can only speak from my experience and am not confident enough to say how every project and business methodology should or shouldn't work.
I do believe you are helping to make my point, though. I am saying that the process may very well be perfection, but if entities within "meatspace" cannot use it well, and may never be able to use it well, then how useful is it, really?
Ensuring the methodology is established and is correct for the project and business is what a manager should be doing; at best, an industry-established process should make that easier, but it can't remove all the work.
Ensuring the methodology survives contact with the "meatspace" is what a leader should be doing - and even if the process is perfect for the project and business this can still be a lot of work.