Spoiler: it's not actually that easy. Compaction, security, sandboxing, planning, custom tools--all this is really hard to get right.
We're about to launch an SDK that gives devs all these building blocks, specifically oriented around software agents. Would love feedback if anyone wants to look: https://github.com/OpenHands/software-agent-sdk
How autonomous/controllable are the agents with this SDK?
When I build an agent, my benchmark is Cursor, which updates the UI at every reportable step and offers plenty of opportunities to intervene; I find that builds a lot of confidence.
Is this level of detail and control possible with the OpenHands SDK? I’m asking because the last SDK that was simple to get into lacked that kind of control.
If you're looking for open source agents, which can run locally, in Docker, or in the cloud, and which have a consistent track record of strong results on benchmarks like SWE-bench, check out https://github.com/All-Hands-AI/OpenHands
We're about to release our Agent SDK (https://github.com/All-Hands-AI/agent-sdk/), which gives devs all the nuts and bolts needed to define custom prompts, tools, security profiles, and multi-agent interfaces.
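To make "tools plus security profiles" concrete, here's a minimal sketch of the pattern: a tool registry and a profile that gates which tools the agent may actually invoke. All of the names here (`Tool`, `SecurityProfile`, `dispatch`) are hypothetical stand-ins for illustration; they are not the actual agent-sdk API.

```python
# Hypothetical sketch: custom tools gated by a security profile.
# These names are illustrative, NOT the real agent-sdk interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes the tool argument, returns a result string

@dataclass
class SecurityProfile:
    allowed_tools: set  # names of tools the agent is permitted to call

def dispatch(tool_calls, tools, profile):
    """Execute each requested (name, arg) call, blocking disallowed tools."""
    registry = {t.name: t for t in tools}
    results = []
    for name, arg in tool_calls:
        if name not in profile.allowed_tools:
            results.append((name, "blocked by security profile"))
        elif name in registry:
            results.append((name, registry[name].run(arg)))
    return results

echo = Tool("echo", "Echo the input back", lambda s: s)
shell = Tool("shell", "Pretend to run a shell command", lambda s: "ran: " + s)
profile = SecurityProfile(allowed_tools={"echo"})

out = dispatch([("echo", "hi"), ("shell", "rm -rf /")], [echo, shell], profile)
print(out)  # [('echo', 'hi'), ('shell', 'blocked by security profile')]
```

The point of the gate sitting in the dispatcher rather than in each tool is that the model can propose whatever calls it likes, but the runtime decides what actually executes.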
I’d like to see a “no jerks” license. It’d be MIT by default, but would call out specific bad actors as disallowed from using the software. That way your average corporate user wouldn’t need to consult a lawyer before adopting it.
Presumably the license would, like practically all open source licenses, be irrevocable. You aren't guaranteed new versions will be issued under the same license (short of a contract saying otherwise, just like every other piece of open source software) but the existing license that did not list you as a jerk can't be revoked...
True, but that's still a risk that adds to the risk of the authors switching the license.
BTW, if the jerk list is tied to the license, if the project had external contributors, they all need to agree to add or remove someone from the list, like any license change…
> BTW, if the jerk list is tied to the license, if the project had external contributors, they all need to agree to add [...] someone from the list, like any license change…
Not if you base it on a license like MIT that allows sublicensing under more restrictive terms (not a lawyer, not legal advice).
This software shall not be used for evil. With the exception of IBM, who, together with their partners and minions, are allowed to use this software for evil.
Here are the prompts I use for my AI environment, though it's changed a fair amount since the last snapshot:
https://github.com/rbren/personal-ai-devbox