
>then you need a bunch of extra parsing and validation code in every recipient object.

That's not a big deal. When we exchange generic information across networks we parse it all the time, and in most use cases that's not an expensive operation. The gain is proper encapsulation: the flip side of imposing meaning globally is that your entire codebase becomes one entangled ball, and as you scale a complex system, that tends to cost you more and more.

In the case of the OP, where a program "breaks" and has to be recompiled every time some signature change propagates through the entire system, that is a significant cost. Again, if you think of a large-scale computer network as an analog to a program, what costs more: parsing an input, or rebooting and editing the entire system every time we add a field to a data structure that most consumers of that data don't care about?
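
To make that concrete, here's a minimal sketch (TypeScript, all names made up) of a recipient that parses only the fields it cares about and ignores the rest, so a field added elsewhere in the message never forces this consumer to change:

    // Hypothetical consumer; assumes messages arrive as JSON text over some transport.
    interface OrderPlaced {
      orderId: string;
      amountCents: number;
    }

    // Parse only what this recipient needs; unknown fields are simply ignored.
    function parseOrderPlaced(raw: string): OrderPlaced | null {
      let data: unknown;
      try {
        data = JSON.parse(raw);
      } catch {
        return null;
      }
      if (typeof data !== "object" || data === null) return null;
      const rec = data as Record<string, unknown>;
      if (typeof rec.orderId !== "string" || typeof rec.amountCents !== "number") {
        return null;
      }
      return { orderId: rec.orderId, amountCents: rec.amountCents };
    }

    // The sender can add "couponCode" tomorrow; this consumer neither knows nor cares.
    const msg = parseOrderPlaced('{"orderId":"A-1","amountCents":1299,"couponCode":"X"}');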

This is how we got microservices, which are nothing but a way to introduce late binding and dynamism into static environments.




> when we exchange generic information across networks we parse information all the time

The goal is to do this parsing exactly once, at the system boundary, and thereafter keep the already-parsed data in a box that has "This has already been parsed and we know it's correct" written on the outside, so that nothing internal needs to worry about that again. And the absolute best kind of box is a type, because it's pretty easy to enforce that the parser function is the only piece of code in the entire system that can create a value of that type, and as soon as you do this, that entire class of problems goes away.

This idea of using types whose instances can only be created by parser functions is known as Parse, Don't Validate, and while it's possible and useful to apply the general idea in a dynamically typed language, you only get the "we know at compile time that this problem cannot exist" guarantee if you use types.
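
A minimal sketch of what that looks like in practice (TypeScript here, names illustrative; in TS the brand is a convention rather than an airtight guarantee):

    // The brand symbol is not exported, so the only sanctioned way to obtain an
    // Email is through parseEmail; everything downstream can rely on that.
    declare const emailBrand: unique symbol;
    type Email = string & { readonly [emailBrand]: true };

    // The single parser at the system boundary.
    function parseEmail(input: string): Email | null {
      return input.includes("@") ? (input as Email) : null;
    }

    // Internal code never re-validates; holding an Email is the proof.
    function sendWelcome(to: Email): void {
      console.log(`sending welcome mail to ${to}`);
    }

    const maybe = parseEmail("user@example.com");
    if (maybe !== null) {
      sendWelcome(maybe);          // fine: we hold a parsed Email
    }
    // sendWelcome("not-an-email"); // rejected at compile time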


> The goal is to do this parsing exactly once, at the system boundary

You are only parsing once at the system boundary, but under the dynamic model every receiver is its own system boundary. As the earlier comment pointed out, microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively. Yes, you are only parsing once in each service, but you are still parsing many times when you look at the entire program as a whole. "Parse, don't validate" doesn't really change anything.


> but under the dynamic model every receiver is its own system boundary

I'm not claiming that it can't be done that way; I'm claiming that it's better not to do it that way.

You could achieve security by hiring a separate guard to stand outside each room in your office building, but it's cheaper and just as secure to hire a single guard to stand outside the entrance to the building.

> microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively

I think microservices emerged for a different reason: to make more efficient use of hardware at scale. (A monolith that does everything is in every way easier to work with.) One downside of microservices is the much-increased system boundary size they imply -- this hole in the type system forces a lot more parsing and makes it harder to reason about the effects of local changes.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Same thing, no? That is exactly what Kay was talking about. That was his vision: infinite nodes, all interconnected, sending messages to each other. That is why Smalltalk was designed the way it was. While the mainstream Smalltalk implementations got stuck in a single-image model, Kay and others did try working on projects to carry the vision forward. Erlang had some success with the same essential concept.

> I'm claiming that it's better not to do it that way.

Is it fundamentally better, or is it only better because the alternative was never fully realized? For something of modern relevance, take LLMs. In your model, you have to have the hardware to run the LLM on your local machine, which for a frontier model is quite the ask. Or you can write all kinds of crazy, convoluted code to pass the work off to another machine. In Kay's world, being able to access an LLM on another machine is a feature built right into the language. Code running on another machine is the same as code running on your own machine.
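
To make the contrast concrete, here's a rough sketch (TypeScript, every name hypothetical) of the adapter code you end up writing today so that a remote model looks like a local one; in Kay's vision this routing would be the runtime's job rather than yours:

    // Callers only see this interface and can't tell where the work happens.
    interface Completion {
      complete(prompt: string): Promise<string>;
    }

    class LocalModel implements Completion {
      async complete(prompt: string): Promise<string> {
        return `local echo: ${prompt}`; // stand-in for running a model in-process
      }
    }

    class RemoteModel implements Completion {
      constructor(private endpoint: string) {}
      async complete(prompt: string): Promise<string> {
        const res = await fetch(this.endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt }),
        });
        const body = (await res.json()) as { text: string };
        return body.text;
      }
    }

    // Identical call site either way; "another machine" becomes a deployment detail.
    async function summarize(model: Completion, doc: string): Promise<string> {
      return model.complete(`Summarize: ${doc}`);
    }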

I'm reminded of what you said about "Parse, don't validate" types. As you alluded to, you can write all kinds of tests to essentially validate the same properties as the type system, but when the language gives you a type system you get all that for free, which you saw as a benefit. But now it seems you are suggesting it is actually better for the compiler to do very little and that it is best to write your own code to deal with all the things you need.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Scaling different areas of an application is one thing. Being able to use different technology choices for different areas is another, even at low scale. And being able to have teams own individual areas of an application via a reasonably hard boundary is a third.



