tao_oat's comments | Hacker News

This is an interesting idea! I searched around and it looks like there's [ast-grep](https://ast-grep.github.io/), an AST-aware CLI that can search and refactor code -- and you can expose it to your AI agent using a skill (https://github.com/ast-grep/agent-skill).

Not exactly symbolic AI, but pretty cool nonetheless.
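To illustrate what "AST-aware" buys you (a miniature analogy using Python's stdlib `ast` module, not ast-grep itself): matching on the syntax tree finds calls by structure, where a regex would trip over formatting.

```python
import ast

source = """
log("start")
value = log ( "end" )   # odd spacing; a regex would need to anticipate this
print(log)              # a reference, not a call -- should not match
"""

# Walk the syntax tree and collect the first argument of every *call*
# to a function named `log`, regardless of whitespace or formatting.
tree = ast.parse(source)
calls = [
    node.args[0].value
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "log"
]

print(calls)  # -> ['start', 'end']
```

ast-grep does the same kind of structural matching, but across many languages and with rewrite support.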


And they love to do this in spite of writing "NO FALLBACKS" etc. in your AGENTS.md.

I haven't used them all but based on my partial research so far:

- OpenClaw: the big one, but extremely messy codebase and deployment

- NanoClaw: simple, main selling point is that agents spawn their own containers. Personally I don't see why that's preferable to just running the whole thing in a container for single-user purposes

- IronClaw: focused on security (tools run in a WASM sandbox, some defenses against prompt injection but idk if they're any good)

- PicoClaw: targets low-end machines/Raspberry Pis

- ZeroClaw: Claw But In Rust

- NanoBot: ~4k lines of Python, easy to understand and modify. This is the one I landed on and have been using Claude to tweak as needed for myself


IronClaw’s security architecture sounds plausible, but I have not audited it. Plugins can only access remote endpoints you’ve explicitly allowed for them, and secrets aren’t exposed to the LLM: they’re injected where needed at call time, and only the secrets authorized for a given plugin ever reach it. Together those two things answer a huge range of the most common prompt injection vulnerabilities, such as credential extraction. So you can give it access to your bank account and email and it can’t email your bank password to an attacker. But it could still transfer money to them.
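The injection pattern, as I understand it, looks roughly like this (a hypothetical sketch with illustrative names, not IronClaw's actual API): the model only ever sees placeholders, and the host substitutes real values at call time, enforcing a per-plugin allowlist.

```python
SECRETS = {"bank_token": "s3cr3t"}
# Which plugins may receive which secrets.
ALLOWLIST = {"bank_plugin": {"bank_token"}}

def inject(plugin: str, args: dict) -> dict:
    """Replace {{SECRET:name}} placeholders, enforcing the allowlist."""
    out = {}
    for key, value in args.items():
        if isinstance(value, str) and value.startswith("{{SECRET:") and value.endswith("}}"):
            name = value[len("{{SECRET:"):-len("}}")]
            if name not in ALLOWLIST.get(plugin, set()):
                raise PermissionError(f"{plugin} may not use secret {name!r}")
            out[key] = SECRETS[name]
        else:
            out[key] = value
    return out

# The bank plugin gets the real token...
assert inject("bank_plugin", {"auth": "{{SECRET:bank_token}}"}) == {"auth": "s3cr3t"}
# ...but an email plugin asking for the same secret is refused, so a
# prompt-injected "email my bank password" request fails at the host layer.
try:
    inject("email_plugin", {"body": "{{SECRET:bank_token}}"})
except PermissionError:
    pass
```

Note this only blocks *exfiltration of credentials*; an in-scope action like "transfer money via bank_plugin" is still authorized, which is the residual risk described above.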

The only secure way to use any of these tools is to give them very limited access - if they need a credit card give them a virtual card with a low limit, or even its own bank account. They can send email but only from their own account; like a human personal assistant. But of course this requires careful thought and adds friction to every new task, so people won’t be doing it.


Everything supports WhatsApp, Telegram, etc. I wish it wasn't so hard to hook up Signal to anything.

I'm using the signal-cli-rest-api but the whole setup feels kinda wonky.
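For what it's worth, the setup boils down to POSTing to the container's send endpoint. A minimal sketch (the `/v2/send` path and field names are from signal-cli-rest-api's docs as I remember them; double-check against your version):

```python
import json
import urllib.request

def build_send_request(base_url: str, sender: str, recipients: list, message: str):
    """Build the HTTP request for signal-cli-rest-api's send endpoint."""
    payload = {"message": message, "number": sender, "recipients": recipients}
    return urllib.request.Request(
        f"{base_url}/v2/send",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_send_request("http://localhost:8080", "+4912345", ["+4467890"], "ping")
print(req.full_url)  # http://localhost:8080/v2/send
# urllib.request.urlopen(req)  # actually send (requires the container running)
```

The wonkiness is less the API and more running a whole extra container plus linking the device via QR code.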


Which would you say has the best cron and heartbeat implementation?

Haven't tried them in enough depth to compare.

Nanobot's was not great (cron + a HEARTBEAT.md meant two ways to do things, which would confuse the AI). But because the implementation is so simple, I could improve it in a few minutes in my own fork!
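The fix I landed on, roughly: fold the heartbeat into the same scheduler as cron jobs, so there's one source of truth for the model to reason about. A stripped-down sketch (hypothetical names, not Nanobot's actual code):

```python
# One scheduler for both cron-style jobs and the heartbeat: each entry
# is just (interval_seconds, callback), so "heartbeat" is not a special
# case with its own config file.
class Scheduler:
    def __init__(self):
        self.jobs = []  # each job: [interval, callback, next_due]

    def every(self, interval, callback, now=0.0):
        self.jobs.append([interval, callback, now + interval])

    def tick(self, now):
        """Run every job whose deadline has passed, then reschedule it."""
        for job in self.jobs:
            interval, callback, due = job
            if now >= due:
                callback()
                job[2] = due + interval

fired = []
s = Scheduler()
s.every(60, lambda: fired.append("heartbeat"))       # heartbeat is just a job
s.every(3600, lambda: fired.append("daily-digest"))  # cron-style job
s.tick(now=61)
print(fired)  # -> ['heartbeat']
```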


I'd apply to work for Anthropic in a heartbeat if it was a European company.

I've been trying to get ChatGPT to stop adding this kind of fluff to its responses through custom instructions, but to no avail! It's one of the more frustrating parts of it, IMO.


Neat landing page, but I don't see how their distribution model would be fundamentally different from / independent of the app stores.


Skills were released in Claude Code, what, yesterday? I doubt there's a simple answer to this -- it'll depend on the model, task, etc.

You could try to get your agent to test its own skills. From https://blog.fsck.com/2025/10/09/superpowers:

> As Claude and I build new skills, one of the things I ask it to do is to "test" the skills on a set of subagents to ensure that the skills were comprehensible, complete, and that the subagents would comply with them. (Claude now thinks of this as TDD for skills and uses its RED/GREEN TDD skill as part of the skill creation skill.)

> The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do.


I think this is somewhat overhyped. If you look at the video that actually exists of this character[^1], it's clearly AI slop that falls flat -- honestly kind of embarrassing for the studio to put out. This seems like more of a media stunt than anything.

[^1]: https://www.youtube.com/watch?v=3sVO_j4czYs


According to [this page](https://github.com/arkenfox/user.js/wiki/4.1-Extensions#-don...), yes, it's redundant in that case.


Of course, to build a movement, you need a way to get people aware and interested in the first place...!


Tell everyone you know, tell them to do the same...

