Hacker News: codebje's comments

Both of those products are out of stock.

ZeroID looks like a good idea to me. There's lots there I'll be digging into over time, and it's related to the use of token exchange for authorising back-end M2M transactions on behalf of a user at the front-end.
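Token exchange in this sense presumably means RFC 8693: the user's token at the front-end is swapped at the token endpoint for a narrower token that a back-end service can present downstream. A minimal sketch of building the request body, where the token values and audience URL are placeholders, not anything from ZeroID itself:

```python
# Sketch of an RFC 8693 token-exchange request body. The subject token and
# audience below are placeholders; a real deployment supplies its own, and
# POSTs the result as application/x-www-form-urlencoded to its token endpoint.
def build_token_exchange_request(subject_token: str, audience: str) -> dict:
    """Form parameters for an OAuth 2.0 token-exchange grant (RFC 8693)."""
    return {
        # Grant type URN registered by RFC 8693 for token exchange.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The user's token as presented to the front-end.
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # The back-end service the exchanged token will be used against.
        "audience": audience,
    }

params = build_token_exchange_request("eyJ...", "https://api.example.internal/orders")
```

The back-end then acts on the user's behalf with a token scoped to just that audience, rather than replaying the user's original token everywhere.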

As far as I can tell the parent post is talking about discovery for agent-to-agent communications, which is not something I have much interest in myself: it feels very "OpenClaw" to replace stable, deterministic APIs with LLMs.


Yeah, I'm leaning deterministic too for most needs, but I do think there's a future for agent-to-agent communication in more specialized cases. An agent with access to proprietary datasets or niche software can produce interesting output. Say someone wants a drawing in AutoCAD: communicating with a trained agent that has MCP access to those kinds of tools seems like it could usefully extend a more generalist agent's capabilities.

This isn't the only thing you'd want to do.

I use containers to isolate agents to just the data I intend for them to read and modify. If I have a data exfiltration event, it'll be limited to what I put into the container, plus whatever the code running inside the container can reach.

I have limited data in reach of the agent, limited network access for it, and was missing exactly this Vault. I'm relieved not to need to invent (vibe code) it.


That'll stay true for consumer software, because the cost for extra resource usage is not borne by the development house.

That's only true if you consider the process the LLM is undergoing to be a faithful replica of the processes in the brain, right?


Why would that be curious? The network is trained on the linguistic structure, not the "intelligence."

It's a difficult thing to produce a body of text that conveys a particular meaning, even for simple concepts, especially if you're seeking brevity. The editing process is not in the training set, so we're hoping to replicate it simply by looking at the final output.

How effectively do you suppose model training differentiates between low quality verbiage and high quality prose? I think that itself would be a fascinatingly hard problem that, if we could train a machine to do, would deliver plenty of value simply as a classifier.


Regular expressions are definitely enough for turning characters into tokens, after which a simple recursive descent parser is vastly more straightforward to write. Lexing is optional, but generally advised.
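A minimal sketch of that split in Python, assuming a small arithmetic grammar for illustration: a regex turns characters into tokens, then a hand-written recursive descent parser consumes them.

```python
import re

# Lexing: regular expressions are enough to turn characters into tokens.
TOKEN_RE = re.compile(r"(?P<NUM>\d+)|(?P<OP>[+*])|(?P<LP>\()|(?P<RP>\))|(?P<WS>\s+)")

def lex(src):
    tokens = [(m.lastgroup, m.group())
              for m in TOKEN_RE.finditer(src) if m.lastgroup != "WS"]
    tokens.append(("EOF", ""))
    return tokens

# Parsing: a simple recursive descent parser, one function per grammar rule:
#   expr   := term ('+' term)*
#   term   := factor ('*' factor)*
#   factor := NUM | '(' expr ')'
class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, kind):
        tok = self.tokens[self.pos]
        assert tok[0] == kind, f"expected {kind}, got {tok}"
        self.pos += 1
        return tok

    def expr(self):
        value = self.term()
        while self.peek() == ("OP", "+"):
            self.eat("OP")
            value += self.term()
        return value

    def term(self):
        value = self.factor()
        while self.peek() == ("OP", "*"):
            self.eat("OP")
            value *= self.factor()
        return value

    def factor(self):
        if self.peek()[0] == "LP":
            self.eat("LP")
            value = self.expr()
            self.eat("RP")
            return value
        return int(self.eat("NUM")[1])

result = Parser(lex("2 * (3 + 4)")).expr()  # evaluates to 14
```

Skipping the lexer would mean the parser deals with whitespace and multi-character tokens itself, which is why lexing is optional but generally advised.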

Turing completeness is the upper bound of computability, not the lower bound. It's useful mostly for showing that some thing can express the full range of computable problems, or for snarking that some thing is far more complex than it has any right to be.

Total languages give up partiality and non-termination, and with them Turing completeness.

Partiality is IMO irrelevant when it comes to computability. Any partial function (that is, one whose range is not defined over its whole domain) can be expressed as a total function by either constricting the domain or expanding the range. For example, a "pop" operation on a stack is not defined for an empty stack. You can just loop forever if pop() is called on an empty stack. Alternatively, you can require that pop() is given a witness that the stack is non-empty, or you can require that pop() returns either the top-most element of the stack or a value that indicates the stack was empty. Any of these lets you compute the same set of things as the partial version.

Non-termination is required to be Turing complete, because being Turing complete means being able to compute functions that one cannot reasonably expect to complete before the heat death of the universe. In _practice_ every function terminates when the computing process dies due to some external factor: process runs out of memory ("real" Turing machines have infinite memory!), user runs out of patience, machine runs out of power, universe runs out of stars, that sort of thing, so _in practice_ doing 2^64 iterations before giving up will generally* give you the same outcome as doing an unbounded number of iterations: it'll either terminate, or the process will be killed (here, due to reaching its iteration limit).
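The "bounded iterations give the same outcome in practice" point can be sketched as trading an unbounded search for a capped one. The function and cap below are illustrative, not from the original comment:

```python
# Trade unbounded search for a bounded one: cap iterations at a large but
# finite limit. Any computation that would have finished within the cap
# gives the same answer; otherwise we report "gave up", which in practice
# is indistinguishable from the process being killed by an external factor.
def find_zero_bounded(f, start=0, limit=2**20):
    """Search for the first n >= start with f(n) == 0, up to `limit` tries."""
    n = start
    for _ in range(limit):
        if f(n) == 0:
            return n
        n += 1
    return None  # gave up: reached the iteration cap
```

With `limit` set to something like 2^64, "reached the cap" and "user killed the process centuries from now" are the same outcome for any caller.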

On the flip side, giving up non-termination and partiality only gives you increased correctness. If there's one thing we've definitely established in computing, it's that we will readily discard correctness to gain a little extra productivity. Why make a developer implement code to handle reaching an iteration limit when you can just make the user get sick of waiting and kill your app?

* 18 quintillion is a very large number. Have a try. The most trivial recursive function, on my M4 Mac, when convincing clang to be smart enough to turn it into a loop but dumb enough not to elide it altogether, would take a bit shy of 600 years to complete if iterating ULONG_MAX times; I didn't wait for that, if I'm honest with you, I ran it with a much smaller iteration count and multiplied it out.


Consumers of dependencies aren't necessarily - or, I would argue, even typically - eyeballing them. The eyeballers in practice seem to mostly be hackers. Skipping the cooldown doesn't mean you're contributing eyes; it means you're volunteering to make the count of victims the attack sweeps up bigger.

No-one is hurt by having the cooldown. Hackers could choose to also have a cooldown, but must balance the risk of competing groups exploiting vulnerabilities first against the reward of a bigger pool of victims to exploit, and without collusion that still favours early exploits over held ones.


"Consumers of dependencies aren't necessarily - or, I would argue, even typically - eyeballing them."

No, but they are the reason software supply chain companies look into the releases. Cooldowns may well shift those priorities, and therefore hurt the ones not doing cooldowns, or doing shorter periods.


The primary culture around Lean is mathematicians looking to prove mathematics. AFAICT Lean is just about the right power for that.

Agda, OTOH, is IMO the dependently typed language for type theorists, and does weird things, but "unproductive" is applicable only for a somewhat narrow view of productivity.

I don't consider there to be a dependently typed language for writing programs that you just want to run, but I would be delighted to be corrected on that viewpoint!


I think Idris 2 is targeted more to programming than to doing math, no?

Yep. I also think it's the best designed out of any of them. As dependently typed languages have become more and more popular, I find it a bit sad that Idris has stayed relatively obscure.

Since you've clearly looked at this a bit... would you give a sentence or two comparing Idris, F*, and the other lesser known players in this space (languages for both writing and formally verifying programs)? I find it a wide space to explore, and while ecosystem maturity seems like a huge deciding factor right now, I assume there's real and meaningful differences between the languages as well.

Idris is rather unique in that the development flow involves writing out your program as a trellis with "holes", and you work through your codebase interactively filling those holes. It feels like Haskell with an even more powerful type system and a companion expert system to alleviate the burden that the type-system's power entails. You're still basically always in program-land, mentally.

F* (and most other dependently-typed languages, or adjacent ones like Liquid Haskell) has a whole external SMT solver layer that lives outside of the language. Think like if SML modules were even less unified with the core language, and also most of your time was spent in that layer. They're really not fun to try and make complex software with, just because the context-switching required at scale is borderline inhuman.

Lean has a unified proof-system in the language like Idris, but it has much the same grain as the languages with external SMT solvers. You're spending most of your mental time in proof-land, thinking primarily about how to prove what you want to do. That's because with how Lean as a language is set up, you're basically centering all your reasoning around the goal. If there's a problem, you're adjusting the structure of your reasoning, changing your proof strategy, or identifying missing lemmas, etc.

You can kind of think of it as though Idris is "inside out" compared to most of the other dependently typed languages.


It's been a little bit since I looked at Idris, so I'll take a closer look now. The QTT behind it struck me as interesting but I didn't play around with it much. Thanks for the tip!

