Hi! Sharing a side project I've been hacking on recently. I'm mainly interested in gathering some early feedback and finding the areas where it doesn't hit the value prop I'm aiming to provide.
Reasonable question. BTW, agents were part of this, but I removed them temporarily. There is somewhat agentic behavior in magic-cli, which you can take a look at.
My main motivation was a gap in the Rust ecosystem for this, as well as a desire to have reasonable abstractions for model alignment, agents, and structured response generation with error correction.
In addition, Ollama is a first-class citizen so local LLMs are supported (it calls the locally hosted APIs which Ollama exposes).
And as a last point, it's just a fun project to hack on.
If you have suggestions for similar abstractions I missed, please let me know!
I’m not sure it’s a good abstraction if it generates a prompt.
Generating good prompts is a Very Hard Problem, and machine-generated ones are almost always worse than hand-crafted ones.
I think if you're serious, you should look at how you can build these systems so the user can drive them with entirely hand-crafted prompts.
Look at your library from that perspective; if the “generates prompt” part doesn’t exist, what parts are still left?
For example, imagine an agent sandbox where the agent has a set of “tools” like web, command line, code editor and has to pick between tools and craft structured arguments to invoke the various tools.
Given that a) the prompts have to be hand-crafted, with tweaks per LLM target, b) the set of tools is entirely configurable by the library user, and c) at runtime you can pick the set of available tools and the LLM to use… that's an abstraction worth using.
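To make the shape of that concrete, here's a minimal sketch of such a tool abstraction in Rust. All the names (`Tool`, `Shell`, `dispatch`) are hypothetical, not any existing library's API; the point is just that the tool set is user-configurable and the runtime picks among tools by name:

```rust
/// A tool the agent can invoke with structured arguments.
/// (Hypothetical trait for illustration, not a real library API.)
trait Tool {
    /// Name the LLM uses to select this tool.
    fn name(&self) -> &str;
    /// Description injected into the (hand-crafted) prompt.
    fn description(&self) -> &str;
    /// Invoke the tool with structured (e.g., JSON) arguments.
    fn invoke(&self, args: &str) -> Result<String, String>;
}

/// Example tool: a command-line runner (stubbed out here).
struct Shell;

impl Tool for Shell {
    fn name(&self) -> &str { "shell" }
    fn description(&self) -> &str { "Run a command-line instruction" }
    fn invoke(&self, args: &str) -> Result<String, String> {
        // A real implementation would execute the command and capture output.
        Ok(format!("ran: {args}"))
    }
}

/// The runtime picks a tool by name from the user-configured set.
fn dispatch(tools: &[Box<dyn Tool>], name: &str, args: &str) -> Result<String, String> {
    tools
        .iter()
        .find(|t| t.name() == name)
        .ok_or_else(|| format!("unknown tool: {name}"))?
        .invoke(args)
}

fn main() {
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(Shell)];
    println!("{:?}", dispatch(&tools, "shell", "ls"));
}
```

With a shape like this, the prompt text itself can stay entirely in the user's hands; the library's job is only the dispatch and the structured-argument plumbing.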
…but it’s hard.
Some other ideas:
- Agent back-off/retry for API outages.
- Agents voting on the best solution.
- An agent that checks the output of another agent, where the library automatically generates a new response if the overseer agent rejects the first one.
- An agent that generates code, which the library then parses and executes.
- Agents with different system prompts (like "Civ 5 advisors") that can suggest different ways of solving a problem.
- Multiple API endpoints to distribute requests across.
- "High and low" agents, where an agent can "ask for help" from a more powerful LLM if it gets stuck (e.g., for coding, if the generated code fails too many times).
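The overseer/retry idea in particular fits a small generic loop. A sketch in Rust, with entirely hypothetical names (`generate` is any closure producing a candidate response, `review` is the overseer's accept/reject check):

```rust
/// Hypothetical overseer loop: regenerate until the reviewer accepts
/// a candidate or the retry budget is exhausted.
fn generate_with_overseer<G, R>(
    mut generate: G,
    review: R,
    max_retries: usize,
) -> Result<String, String>
where
    G: FnMut() -> String,
    R: Fn(&str) -> bool,
{
    for _ in 0..=max_retries {
        let candidate = generate();
        if review(&candidate) {
            return Ok(candidate);
        }
        // Rejected by the overseer: loop and generate a fresh response.
    }
    Err("overseer rejected all candidates".to_string())
}
```

The same skeleton generalizes to several of the bullets above: back-off/retry is this loop with a sleep between attempts, and "high and low" agents swap in a more powerful `generate` after N failures.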
Thanks. You can take a look at the alignment module (there's an example, but it's not in the README); it implements the "overseer" concept.
And the prompts are mostly customizable, except for some hard-coded ones.
Woah, the shell features are super similar.
Honestly, I wasn't familiar with this project. It looks great (and ambitious); I'll try it out. Thanks for the share.
Awesome share! Thank you.
There are definitely similarities, and I love Simon's work.
I guess the extra features here are some more sophisticated UX (asking the user to fill out "placeholders" in the response, the ability to revise the prompt), plus the "ask" and "search" commands.
Will definitely give this a spin.
Glue is an IDL and toolchain that aims to empower you to have a single source of truth for your data models (i.e., structs) and interfaces (i.e., REST/RPC).
https://www.gluelang.dev
The home page has an interactive mini IDE where you can generate code on the fly for multiple targets (the codegen is compiled to WASM).
To preempt a few questions: I'm writing a blog post to give some motivation for this project and explain where I think other solutions fall a bit short.
Even if this post is lost in the HN abyss, here's hoping this is useful for someone! Please feel free to raise issues on GH or write them here.
Code is here: https://github.com/guywaldman/glue