guywald's comments | Hacker News

Hi! Sharing a side project I've been hacking on recently. I'm mainly interested in gathering some early feedback and finding areas where it doesn't hit the value prop I aim to provide.

Glue is an IDL and toolchain that aims to empower you to have a single source of truth for your data models (i.e., structs) and interfaces (i.e., REST/RPC).

https://www.gluelang.dev

The home page has an interactive mini IDE where you can generate code on the fly for multiple targets (compiled the codegen into WASM).
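To illustrate the single-source-of-truth idea for readers unfamiliar with IDLs: from one schema definition (the syntax below is hypothetical, not actual Glue syntax), a generator targeting Rust might emit a plain struct like this. This is only an illustration of the concept, not Glue's actual output.

```rust
// Hypothetical IDL input (illustrative, not real Glue syntax):
//
//   struct User {
//     id: int
//     name: string
//   }
//
// A Rust code-generation target might then emit:

#[derive(Debug, Clone, PartialEq)]
pub struct User {
    pub id: i64,
    pub name: String,
}

fn main() {
    let user = User { id: 1, name: "Ada".to_string() };
    // The same schema would generate equivalent types for other targets.
    assert_eq!(user.id, 1);
    println!("{:?}", user);
}
```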

To preempt a few questions, I am writing a blog post to give some motivation for this project and where I think other solutions fall a bit short.

Even if this post is lost in the HN abyss, here's hoping this is useful for someone! Please feel free to raise issues on GH or write them here.

Code is here: https://github.com/guywaldman/glue


Reasonable question. BTW, agents were part of this, but I removed them temporarily. There is somewhat agentic behavior in magic-cli, which you can take a look at.

My main motivation was a gap in the Rust ecosystem for this, as well as a desire to have reasonable abstractions for model alignment, agents and structured response generation with error correction.

In addition, Ollama is a first-class citizen so local LLMs are supported (it calls the locally hosted APIs which Ollama exposes).

And as a last point, it’s just a fun project to hack on. If you have suggestions for similar abstractions I missed, please let me know!
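The "structured response generation with error correction" idea mentioned above can be sketched as a parse-retry loop: call the model, try to parse its output into a structured form, and on failure feed the parse error back for a corrected attempt. The closure below is a stand-in for an LLM call; none of these names are magic-cli's actual API.

```rust
// Try to parse a `key=value` response from a model.
fn parse_key_value(s: &str) -> Result<(String, String), String> {
    match s.split_once('=') {
        Some((k, v)) => Ok((k.trim().to_string(), v.trim().to_string())),
        None => Err(format!("expected `key=value`, got `{s}`")),
    }
}

// Call `llm` (a stand-in for a model call), passing back the previous
// parse error (if any) so the model can correct itself.
fn generate_structured<F>(mut llm: F, max_attempts: usize) -> Option<(String, String)>
where
    F: FnMut(Option<&str>) -> String,
{
    let mut last_err: Option<String> = None;
    for _ in 0..max_attempts {
        let raw = llm(last_err.as_deref());
        match parse_key_value(&raw) {
            Ok(parsed) => return Some(parsed),
            Err(e) => last_err = Some(e),
        }
    }
    None
}

fn main() {
    // Simulated model: malformed on the first try, corrected after feedback.
    let mut calls = 0;
    let result = generate_structured(
        |err| {
            calls += 1;
            if err.is_none() {
                "oops no equals sign".to_string()
            } else {
                "lang=rust".to_string()
            }
        },
        3,
    );
    assert_eq!(result, Some(("lang".to_string(), "rust".to_string())));
    assert_eq!(calls, 2);
    println!("parsed after {calls} calls");
}
```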


If you want feedback…

I’m not sure it’s a good abstraction if it generates a prompt.

Generating good prompts is a Very Hard Problem, and machine-generated ones are almost always worse than hand-crafted ones.

I think if you’re serious you should look at how you can build these systems so the user can use them with entirely hand crafted prompts.

Look at your library from that perspective; if the “generates prompt” part doesn’t exist, what parts are still left?

For example, imagine an agent sandbox where the agent has a set of “tools” like web, command line, code editor and has to pick between tools and craft structured arguments to invoke the various tools.

Given that a) the prompts have to be hand crafted with tweaks per LLM target, b) the set of tools is entirely configurable by the library user, c) at runtime you can pick the set of available tools and LLM to use… that’s an abstraction worth using.

…but it’s hard.
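The tool abstraction described above could be sketched roughly like this (hypothetical names, not any existing library's API): the user supplies the hand-crafted prompts and the tools, and the library only does registration and dispatch of the model's structured decision.

```rust
use std::collections::HashMap;

// A user-supplied tool the agent can invoke.
trait Tool {
    fn name(&self) -> &str;
    fn invoke(&self, args: &str) -> String;
}

// A trivial example tool.
struct Echo;
impl Tool for Echo {
    fn name(&self) -> &str { "echo" }
    fn invoke(&self, args: &str) -> String { args.to_string() }
}

// The library's job: hold a user-configured set of tools and dispatch
// the model's (hand-prompted) decision, here formatted as "tool: args".
struct Toolbox {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl Toolbox {
    fn new() -> Self { Self { tools: HashMap::new() } }

    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name().to_string(), tool);
    }

    fn dispatch(&self, decision: &str) -> Option<String> {
        let (name, args) = decision.split_once(':')?;
        self.tools.get(name.trim()).map(|t| t.invoke(args.trim()))
    }
}

fn main() {
    let mut toolbox = Toolbox::new();
    toolbox.register(Box::new(Echo));
    assert_eq!(toolbox.dispatch("echo: hello"), Some("hello".to_string()));
    assert_eq!(toolbox.dispatch("web: query"), None); // unregistered tool
    println!("dispatch ok");
}
```

Note that nothing here generates a prompt: the prompt that produces the `"tool: args"` decision stays entirely in the user's hands.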

Some other ideas:

- Agent back-off/retry for API outages.

- Agents voting on the best solution.

- An agent checking the output of another agent; the library automatically generates a new response if the overseer agent rejects the first one.

- An agent that generates code; the library parses and executes it.

- Agents with different system prompts, like “civ5 advisors”, that generate suggestions for solving a problem in different ways.

- Multiple API endpoints to distribute requests.

- “High and low” agents, where an agent can “ask for help” from a more powerful LLM if it gets stuck (e.g. for coding, if the generated code fails too many times).
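One of those ideas, the overseer that rejects a draft and triggers regeneration, is simple to sketch. Both closures below stand in for LLM calls; nothing here is a real library API.

```rust
// Generate drafts until the overseer accepts one, up to `max_tries`.
fn generate_with_overseer<G, O>(mut generate: G, overseer: O, max_tries: usize) -> Option<String>
where
    G: FnMut(usize) -> String, // attempt number -> draft response
    O: Fn(&str) -> bool,       // true = accept the draft
{
    for attempt in 0..max_tries {
        let draft = generate(attempt);
        if overseer(&draft) {
            return Some(draft);
        }
    }
    None
}

fn main() {
    // Simulated generator improves on the second attempt; the overseer
    // rejects any draft containing "TODO".
    let answer = generate_with_overseer(
        |attempt| {
            if attempt == 0 {
                "TODO: fill in".to_string()
            } else {
                "done".to_string()
            }
        },
        |draft| !draft.contains("TODO"),
        3,
    );
    assert_eq!(answer, Some("done".to_string()));
    println!("accepted: {}", answer.unwrap());
}
```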

Not: “literally anything” -> library generates terrible prompt -> returns response from API.


Thanks. You can take a look at the alignment module (there’s an example but it’s not in the README), it implements the “overseer” concept. And the prompts are mostly customizable, except for some hard-coded ones.


Nope, just my ADHD probably :-) Thanks, will go over it more carefully.


Okay! Cargo is misspelled as “Caro”, and below that there is a misplaced `

I would PR this, but I don't believe the overhead would be worth it.


Someone put out a PR to fix it (was it you, by any chance?), so it's resolved.


Nice! I like the `needs` utility :)


It's currently set not to stream (https://github.com/guywaldman/magic-cli/blob/4d4dca034063aa6...). The performance is something I plan to improve.


Woah, the shell features are super similar. Honestly was not familiar with this project, looks great (and ambitious). I'll try it out. Thanks for the share.


Another approach converts into Python:

A CLI assistant that responds by generating and auto-executing a Python script: https://github.com/AbanteAI/rawdog


Awesome share! Thank you. There are definitely similarities, and I love Simon's work. I guess the extra features are some sophisticated UX (requesting the user to fill out "placeholders" in the response, ability to revise the prompt), the "ask" command and the "search" command. Will definitely give this a spin.


Yep, this is auto-generated by cargo-dist (https://opensource.axo.dev/cargo-dist/book/)


This is a great question. I added a "Why Rust?" section to the blog post to provide my rationale: https://guywaldman.com/posts/introducing-magic-cli#why-rust


I assume you didn't mean to share a localhost link :)



Whoops, fixed the original reply. Thanks. I guess I'm excited that I got all this traction from HN ;)


Would be a great way to tell someone to "fuck off" lol.


Huh. Weird for such a simple "program" if you can even call it that, but I guess I get it. Thanks.


Uh buddy you linked to localhost:3000.


It’s what the LLM told him to do


I am but a mere vessel to my neural network overlords


Working on my machine


Thanks for the heads up, friend.

