Hacker News | webdevladder's comments

Full-stack rich schemas, not the poor lossy JSON Schema or other language-agnostic ones, are so nice for long-term quality and development velocity. You get to categorically avoid bugs that drag teams down. Zod 4, ArkType, and Valibot are all great.


This is the problem inherent in web dev I suspect. JS developers thought they reached the zenith of programming because they had a type system and could realize some gains via strong typing checks applied to what would otherwise be network calls.

However, at a certain point, you're better off not writing a web app anymore, just an app with a somewhat wonky, imprecise runtime, one that lacks any sort of speed and has many drawbacks.

And you lose one of the most fundamentally important parts of the web, interop. I'm sure other langs can be made to speak your particular object dialect, however the same problems that plague those other type systems will still plague yours.

Which circles back to my issue, no, sticking your head in the sand and proclaiming nothing else exists, does not, in fact, make things better.


You've missed the point, it's inherent in any serialized communication, and the gains are far greater than a type system. Protobuf and friends, and every type system in existence, pale in comparison to runtime capabilities and guarantees.


ArkType is a really interesting library that has a difficult time marketing itself. More than being a schema validator, it brings TS types into the runtime, so you can programmatically work with types as data with (near?) full fidelity.
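To make that concrete, here's a toy sketch of the "types as data" idea (my own illustration, not ArkType's actual API): the definition is a plain runtime value that you can validate against and also infer a static type from.

```typescript
// Toy illustration of "types as data", not ArkType's actual API:
// the schema is an ordinary runtime value...
const userDef = {
  id: "number",
  username: "string",
} as const;

// ...from which the static type can be recovered...
type Infer<D> = {
  [K in keyof D]: D[K] extends "number" ? number
    : D[K] extends "string" ? string
    : never;
};

type User = Infer<typeof userDef>; // { id: number; username: string }

// ...while the same value drives runtime validation.
function validate<D extends Record<string, "number" | "string">>(
  def: D,
  input: unknown,
): input is Infer<D> {
  if (typeof input !== "object" || input === null) return false;
  return Object.entries(def).every(
    ([key, kind]) => typeof (input as Record<string, unknown>)[key] === kind,
  );
}
```

ArkType goes far beyond this (full TS type syntax, introspection), but the core trick is the same: one definition yields both the static type and the runtime behavior.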

I've been evaluating schema libraries for a better-than-Zod source of truth, and ArkType is where I've been focused. Zod v4 just entered beta[1], and it addresses many of my problems with it. For such a mature library to improve like this, v4 is a treat and speaks volumes about the quality of engineering. But ArkType has a much larger scope, and feels to me more like a data modeling language than a library. Something I definitely want as a dev!

The main downside I see is that its runtime code size footprint is much larger than Zod. For some frontends this may be acceptable, but it's a real cost that isn't wise to pay in many cases. The good news is with precompilation[2] I think ArkType will come into its own and look more like a language with a compiler, and be suitable for lightweight frontends too.

[1] https://v4.zod.dev/v4

[2] https://github.com/arktypeio/arktype/issues/810


I recently went down this same rabbit hole for backend and stumbled on Typia[0] and Nestia[1] from the same developer. The DX with this is fantastic, especially when combined with Kysely[2] because now it's pure TypeScript end-to-end (no runtime schema artifacts and validations get AOT inlined).
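For anyone unfamiliar with the AOT part: a call like `typia.is<User>(input)` gets replaced at compile time with plain inlined checks, so there's no schema object at runtime. A hand-written approximation of the kind of code that results (illustrative only, not Typia's actual generated output):

```typescript
interface User {
  id: number;
  name: string;
}

// Roughly what an AOT transform inlines in place of `typia.is<User>(input)`:
// a hand-written approximation, not Typia's actual generated code.
function isUser(input: unknown): input is User {
  if (typeof input !== "object" || input === null) return false;
  const o = input as Record<string, unknown>;
  return typeof o.id === "number" && typeof o.name === "string";
}
```

The upside is there's no schema DSL to learn and nothing to ship beyond the checks themselves; the downside, as discussed below, is a hard dependency on compiler internals.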

I was so shocked by how good this is that I ended up writing up a small deck (haven't had time to write this into a doc yet): https://docs.google.com/presentation/d/1fToIKvR7dyvQS1AAtp4Y...

Shockingly good (for backend)

[0] Typia: https://typia.io/

[1] Nestia: https://nestia.io/

[2] https://kysely.dev/


An interesting development with Typia is that it will need to be rewritten in Go to work with TypeScript 7. https://github.com/samchon/typia/issues/1534#issuecomment-27...

This is because it relies on patching the TypeScript implementation. I'm curious whether its approach is even feasible with Go.


Between this and node adding the --experimental-strip-types option which would otherwise allow people to skip compilation, I'm not sure I would choose Typia right now. I'm sure it's a great library, but these don't bode well for its future.


I think it's fair to be skeptical, but I'm aligned with the overall approach the author took and I think the approach itself is what is interesting (pure TS + AOT).

Author + contributors and ts-patch team[0] seem up for a rewrite in Go based on that thread! Might be bumpy, but a pure TS approach is really appealing. I'm rooting for them :)

[0] https://github.com/nonara/ts-patch/issues/181#issuecomment-2...


I was going to ask about how pure types would fill the gap for other validations in Zod like number min/max ranges, but seeing the tags feature use intersection types for that is really neat. I tried assigning a `string & tags.MinLength<4>` to a `string & tags.MinLength<2>` and it's interesting that it threw an error saying they were incompatible.


That's because "minimum length" cannot be enforced in TypeScript. Maybe you already know this.

I'm not a Typia user myself, but my RPC framework has the same feature, and the MinLength issue you mentioned doesn't crop up if you only use the type tags at the client-server boundary, which is enough in my experience.
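For the curious, the tag trick can be modeled with a phantom property on an intersection type. This is an illustrative sketch with made-up names, not Typia's internals:

```typescript
// Illustrative sketch, not Typia's internals: the tag is a phantom
// property that exists only at the type level; a runtime check at the
// boundary is what actually produces the tagged value.
type MinLength<N extends number> = { readonly __minLength?: N };

function assertMinLength<N extends number>(
  s: string,
  n: N,
): string & MinLength<N> {
  if (s.length < n) throw new Error(`expected length >= ${n}, got ${s.length}`);
  return s as string & MinLength<N>;
}

const username = assertMinLength("hello", 4); // string & MinLength<4>
// Assigning this where `string & MinLength<2>` is expected is a type error,
// because the literal type 4 is not assignable to the literal type 2.
```

Since the tag never exists at runtime, the values are ordinary strings everywhere else, which is why keeping the tags at the boundary sidesteps the incompatibility.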


Thanks for sharing the deck! I had no idea Typia existed and it looks absolutely amazing. I guess I'll be trying it out this weekend or next :)


The docs have a bit of a rough edge because the author is Korean, but the examples are quite good and took me maybe 2-3 hours to work through.

Once everything clicked (quite shortly in), I was a bit blown away by everything "just working" as pure TypeScript; I can only describe the DX as "smooth" compared to Zod because now it's TypeScript.


Definitely check out Valibot as well, it may be the smaller footprint zod you’re looking for: https://valibot.dev


There's also zod mini now too https://v4.zod.dev/packages/mini


> it brings TS types into the runtime

So...it's a parser. Like Zod or effect schema.

https://effect.website/docs/schema/introduction/


No, it's more like a type reflection system, at least as I understand it. You can use it to parse types, but you can also do a lot more than that.


Could you give an example or two of “more than that”?


Yeah, you can walk the AST of your types at runtime and do arbitrary things with it. For example, we're using ArkType types as our single source of truth for our data and deriving database schemas from them.

This becomes very nice because ArkType's data model is close to an enriched version of TypeScript's own data model. So it's like having your TypeScript types introspectable and transformable at runtime.
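A toy sketch of what deriving a database schema from a runtime type definition can look like (my illustration, not ArkType's actual introspection API):

```typescript
// Toy sketch, not ArkType's actual API: because the type definition is
// plain data at runtime, deriving a database schema is a simple traversal.
const userDef = {
  id: "number",
  username: "string",
} as const;

// Hypothetical mapping from definition kinds to SQL column types.
const sqlTypes: Record<string, string> = {
  number: "integer",
  string: "text",
};

function toCreateTable(table: string, def: Record<string, string>): string {
  const columns = Object.entries(def)
    .map(([name, kind]) => `${name} ${sqlTypes[kind]}`)
    .join(", ");
  return `CREATE TABLE ${table} (${columns})`;
}

// toCreateTable("users", userDef)
// "CREATE TABLE users (id integer, username text)"
```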


TypeBox is similar: its runtime types are designed to match JSON Schema's data model without any need for conversion.


You can do whatever you want with the AST in effect schema too, it's a parser with a decoder/encoder architecture:

https://effect.website/docs/schema/transformations/


That's neat, thanks!


> The main downside I see is that its runtime code size footprint is much larger than Zod.

Yes, it unfortunately really does bloat your bundle a lot, which is a big reason I personally chose to go with Valibot instead (it also helps that it's a lot closer to Zod's API, so it's easier to pick up).

Thanks for linking that issue, I'll definitely revisit it if they can get the size down.


Personally, I find Zod’s API extremely intimidating. Anything more resembling TypeScript is way better. ArkType is neat, but ideally we’d have something like:

  export reflect type User = {
    id: number;
    username: string;
    // ...
  };
Edit: just remembered about this one: https://github.com/GoogleFeud/ts-runtime-checks


This is why I like libraries like typia or typebox-codegen; I'd prefer to write TypeScript and generate the validation, rather than write a DSL.


It's not perfect and doesn't cover all of Zod's functionality (iirc coercion), but I've used https://www.npmjs.com/package/ts-to-zod before to generate Zod schemas directly from types.


I believe I was the target of employment-flavored spear phishing a few months ago. Could have been a researcher like the OP.

- 3 new email chains from different sources in a couple weeks, all similar inquiries to see if I was interested in work (I wasn't at the time, and I receive these very rarely)

- escalating specificity, all referencing my online presence, the third of which I was thinking about a month later because it hit my interests squarely

- only the third acknowledged my polite declining

- for the third, a month after, the email and website were offline

- the inquiries were quite restrained, having no links, and only asking if I was interested, and followed up tersely with an open door to my declining

I have no idea what's authentic online anymore, and I think it's dangerous to operate your online life with the belief that you can discern malicious written communications with any certainty, without very strong signals like known domains. Even realtime video content is going to be a problem eventually.

I suppose we'll continue to see VPN sponsorships prop up a disproportionate share of the creator economy.

In other news Google routed my mom to a misleading passport renewal service. She didn't know to look for .gov. Oh well.


Location: United States

Remote: in-office/hybrid preferred, open to remote

Willing to relocate: yes

Technologies: TypeScript, Svelte, HTML/CSS/JS, Postgres, SQL/NoSQL, Node/Deno/Bun, AI prompt engineering

Résumé/CV: https://docs.google.com/document/d/1Fti-__uwjazBllAqR73wrQ1l...

Email: mail@ryanatkn.com

I’m a design-minded software engineer with 13 years of experience as a fullstack web developer. My specialty is frontends with novel UX, heavy interactivity, real-time updates, and high performance. I enjoy prototyping, architecting high quality maintainable systems, creating custom dev tools, and technical writing. I like the workhorse role and I’m open to being a lead.

I have a lot of experience with open source, and I recently dipped a toe into technical video creation and blogging. I’ve also been integrating AI more in my workflows - in 2024, I experienced LLMs reaching the threshold needed to increase my coding productivity without losing quality for many kinds of problems.

I'm not actively searching for work, so for example I’m not looking for non-Svelte frontend roles, but I would jump at the right opportunity. Please reach out if you think I could be a good fit!


Reminder that malicious impersonation is common and easily automated with LLMs.


It doesn't even have to be malicious if you have a common username.


Counterfactual invisibility is a real bummer.


Also the heuristic used to collapse file diffs makes it so that the most important change in a PR often can't be seen or ctrl-f'd without clicking first.


Blame it on go dependency lists and similar.

What do you even review when it's one of those? There's thousands of lines changed and they all point to commits on other repositories.

You're essentially hoping it's fine.


Shipping code to production without evidence anyone credible has reviewed it at a minimum is negligence.


You're claiming here that you do a review of all of your dependencies?


For security critical projects, of course. I even reproducibly bootstrap my own compilers and interpreters.


I've always considered the wider point to be that viewing diffs inline has been a laziness inducing anti-pattern in development: if you never actually bring the code to your machine, you don't quite feel like it's "real" (i.e. even if it's not a full test, compiling and running it yourself should be something which happens. If that feels uncomfortable...then maybe there's a reason).


I think this minimizes the fact that interop - the main selling point to me as a user - comes at a performance cost where every component you use could have its own unnecessary runtime attached.[1] Using a framework like Lit with web components is the recommended way to use them.

This cost will compound over time where new frameworks emerge, and components get stuck on older versions of their frameworks.

I can't see this as anything but significant, and not to be minimized. Having multiple redundant libraries on a page is not the direction I would advise anyone to take, particularly not when baked into the accepted best practices. This bodes poorly in the long term.

I've listened to the arguments from web component advocates in blog posts, social media, and videos for years now, and I should be in the target market. But on top of the interop tax, they're full of negatives that aren't present in the mainstream frameworks.

Interop works great within each framework's ecosystem. The same dynamics that cause developers to seek interop cause them to huddle around a small number of mainstream frameworks. So we get a few vibrant ecosystems that push the state of the art together. Web components cannot keep up on the tech side of things, and introduce a ton of complexity to the web platform - ignorable to me as a dev, but not for browser implementers - in service of their early 2010s designs.

[1] https://x.com/Rich_Harris/status/1840116730716119356


I cover this in another post [1], but broadly:

- Not every web app is perf-sensitive to every extra kB (eCommerce is, productivity tools typically aren't)

- Plenty of frameworks have tiny runtimes, e.g. Svelte is 2.7kB [2]

- I wouldn't advocate for 100 different frameworks on the page, but let's say 5-6 would be fine IMO

No one is arguing that this is ideal, but sometimes this model can help, e.g. for gradual migrations or micro-frontends.

BTW React 17 actually introduced a feature where you could do exactly this: have multiple versions of React on the same page [3].

[1]: https://nolanlawson.com/2021/08/01/why-its-okay-for-web-comp...

[2]: https://bundlephobia.com/package/svelte@4.2.19

[3]: https://legacy.reactjs.org/blog/2020/10/20/react-v17.html


While this is true I think the multiple libraries problem is a rounding error when you look at the majority of web apps created today. React and react-dom combined are over 100KB. Svelte and Lit are in the single digits. So you could embed a lot of frameworks before you get close to the bloat people use every single day without even thinking about it.


As a Svelte user this argument rings hollow. You can't judge frontend by React and the way it's badly used.


> You can't judge frontend by React and the way it's badly used.

IMO you can because it’s the vast majority of webapp usage today. I’m also a heavy Svelte user and I love it but front end web dev is practically a React monoculture so it makes sense to think about it when evaluating options.

I’m not saying it isn’t a problem inherent in web components, it is. But using it as a reason to not adopt web components runs contrary to the logic the vast majority of the industry currently uses. Perfect as the enemy of good and all that.


React is irrelevant for me and my users. This is not an argument in favor of web components over Svelte. Adopting web components would mean an objectively worse UX for my users - for example requiring them to enable JS.

You won't get a Svelte user to look past the flaws of web components by saying "React is bad".


Yes, you’re talking about you and your users. I’m talking about the industry at large. Those two perspectives don’t have to line up.

The article we’re discussing is titled “Web Components are okay”, not “Web Components are better than Svelte for webdevladder and their users”.


Look at the thread you've created here - I'm arguing that the article minimizes the antipattern cost they impose, and your response brings up React as if it somehow changes that.


Yes, I previously mentioned the “perfect as the enemy of good” argument.

Like I already said, I use and like Svelte. But the vast majority of the web dev ecosystem uses React. Web components would be better than everyone using React. Arguably everyone using Svelte could be better still but that’s a separate debate.

> your response brings up React as if it somehow changes that.

It does. Because the industry clearly has no problem with a large upfront cost, given that it imposes one today. Web components would be better than what we have today even if it isn’t the ideal.


You absolutely can judge tools by how they are used, especially if said tool encourages poor usage.


Except any beginner can make a mess with anything. Methodologies and frameworks evolve, but not all of them cater to beginners.


You can judge React, but like I said, not frontend. You're responding to an argument I didn't make.


React has one up-front size for rendering code whether you use 1 component or 10,000 components.

Svelte and Lit rendering code size just keeps going up, and up, and up....

You can argue about which is better, but this kind of naive size comparison is disingenuous.


While it’s true that Svelte and Lit can grow in size dependent on project there’s no world in which even large projects get close to the base level of the React runtime.


Reddit rewrote a small part of their website with web components using Lit. 100+ requests and over a megabyte of JavaScript to render a side menu.

Because they probably did the "several runtimes don't matter" thing, and every tiny component loads the full Lit runtime.


I have no idea about that but if the figures you’re providing are correct it’s pretty obvious the answer is that they did it wrong. There is nothing in the web component API that would require 100+ requests nor several MB of JS, especially when you’re in control of every step in the process!


> every tiny component loads the full lit runtime

This is just not true.


Why does it need 100+ requests and over a megabyte of JS then?

Edit: when it was "unveiled" the sum total of JS was 178 requests totaling 1.37 MB: https://x.com/dmitriid/status/1777404560316707052


I don't know, but it only loads one copy of Lit.


Thank you, good to know


Only if you are doing it wrong. https://lit.dev/docs/tools/publishing/


A broader observation, beyond the point in my parent comment: web components need to win over framework authors. The signs are not trending well here from what I've seen consistently. That community is on X, and web components are not addressing their problems, nor are they being used in optimal scenarios. I hope web components can win them over, but they're mostly saying web components have been a failure, arguably on balance bad for the web.


I don't really understand this argument, to be frank. Most runtimes are pretty small, and there's not much of a performance overhead to both runtimes running at the same time. It's not like these are two realtime engines both purring along in the background or something like that. All modern web frameworks are reactive, and won't do anything unless something needs responding to. If one part of the page is built with React, another part is built with Lit, and a third part with Svelte, I don't see how that will have noticeably worse UX (or battery consumption) than a page made with just one framework, even when reactive triggers are frequently exchanged between them.

The tweet you quote is about whether web components are "useful primitives on which to build frameworks". I doubt many web component fans (who actually really used them) would say that they are. They're a distribution mechanism, and the only alternative I've seen from these framework authors is "just make the same library 7 times, once for React, once for Preact, once for Svelte, once for Solid, once for Vue, once for vanilla JS". This is awful.


You're ignoring page bloat as a performance cost. That's hugely impactful for UX on the web.


Not entirely, I said "Most runtimes are pretty small".

I think people got trained by React into thinking that frameworks are big. SolidJS is 7kb, Lit is 5kb, Svelte is tiny and used to have no runtime at all, etc. Only React is big. And, well, if you're writing React components and publishing them as web components, it's usually quite feasible to build them with Preact instead, which is tiny as well.

So on a page with like some hodgepodge of 5 frameworks purring along inside various web components, there's still going to be only 20-30 kb of extra overhead. You can compress one image slightly better and save more than that.


The point being made is that web components can pay this cost per-component, and this problem will compound over time. This is an unprecedented cost for frontend frameworks, and it's the expected usage pattern.


I've yet to see this go wrong in practice. The kinds of components that are worth publishing as web components are often large, non-trivial components. Eg media libraries, emoji pickers (like the one made by this article's author), chatboxes, and so on. They are the kinds of things you only have a limited number of on your page. They're also the kinds of things where application code tends to be much bigger than the framework (except if the framework is React).

On the other hand, if a component is small and focused in scope, it's likely either written in vanilla JS (like https://shoelace.style/), or made for a single framework (like the average React infinite scroll component).

In other words, I don't think you're wrong, but I do think you're prematurely optimizing a problem that doesn't really exist in reality. And the cost is big: if you get your way, every component author needs to either lock themselves into a single framework's users, or make 7-8 different versions of their component. I'd argue that that's much more wasteful than a few kb extra framework JS.


> or made for a single framework

And then each component will have the entirety of that framework packaged with it when you distribute them. Unless you take special care


It’s also just not actually true though. It’s not considered good practice to bundle your web components when publishing to npm for this exact reason. That’s something that should happen inside the final app itself where the components get used so you only have one instance of Lit for example if you are using that.


This just doesn't happen much. Usually a whole app shares one instance of Lit.

I did see one very badly configured app pull in six copies of Lit once because their bundler wasn't deduping, but: 1) that's still less than a React, and 2) an `npm dedupe` run fixed it.


This looks incredibly well-designed and documented, can't wait to watch some speed runs!


My experience tells me the opposite, it's an incredibly thoughtful and useful evolution IMO.

