
When I first heard that bun was written in zig, I thought that was an odd choice for such a large project, mostly because the language is "unstable" and is still making significant breaking changes.

I would guess dealing with breaking changes is a big motivation for this.


If only keyboard makers would just always put escape there.

It should always be "hold for control" and "tap for esc".

I don't think it is a lack of investment necessarily, so much as not building the right thing.

What we need is a framework that is easy to use, cross platform, open source, and ideally can be used from your programming language of choice.


You are not going to believe this... (joking)

Are the available FOSS cross-platform frameworks really not that good?

There's at least Qt, GTK, umm, and, I guess Juce and wxWindows, right? Oh, I see there are more:

https://en.wikipedia.org/wiki/List_of_platform-independent_G...

Can you explain what's deficient about the first two I mentioned?


> Qt

Arcane build system. I mean, I guess it technically supports CMake these days, but I have never been able to get anyone else's Qt project to build without much gnashing of teeth.

Emulated native widgets try for pixel-perfect, but tend to feel wrong somehow.

> Gtk

Outside of a Linux/Gtk native environment, Gtk applications are awful. Take GIMP on macOS, for example: it's had window focus issues (export dialog getting lost behind the main application window) literally ever since Gtk on macOS dropped the XQuartz dependency. And that's the flagship application for the toolkit.


CMake support in Qt is perfectly fine nowadays. There are some (optional) custom commands you can use, but generally it's just plain CMake.

So, your critique of Gtk sounds convincing, but for Qt you seem to be admitting that they now offer a less horrible way to build than they used to.

I looked at this: https://doc.qt.io/qt-6/cmake-get-started.html ... and I'll admit they seem to be hiding some nasty stuff under the hood. But it still seems workable. I guess the devil is in the details?


GTK 3 hello world is 150-200mb. They really messed up since GTK 2 was 30mb (like macOS AppKit).

GIMP itself is 62MB on my host, I'm not sure what kind of hello world you're building that's 3x that size.

Dunno, try it yourself. I wrote the hello world with C (12 lines?) and launched it on NixOS in Wayland (sway or niri). Maybe non-Wayland does better?

Granted, not the best measure of memory usage. But the GTK 2 version was 30mb.
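
For reference, a minimal GTK 3 hello world in C along the lines described above (the exact program the commenter measured isn't shown, so this is just an illustrative sketch). Build with something like cc hello.c $(pkg-config --cflags --libs gtk+-3.0) -o hello:

    /* minimal GTK 3 hello world */
    #include <gtk/gtk.h>

    int main(int argc, char *argv[]) {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(window), "Hello, world");
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_container_add(GTK_CONTAINER(window), gtk_label_new("Hello, world"));
        gtk_widget_show_all(window);

        gtk_main();
        return 0;
    }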


Maybe GP means on non-Linux systems? When you have to "vendor" a large number of libraries?

> cross platform

That's one term that should never be used in a design meeting. None of the GUIs I've used have managed to do this right, not even Emacs and Firefox. The platforms are totally different (and in the case of Linux/Unix, there are a lot of different HIGs competing). So trying to be cross platform is a good illustration of the lesson in https://xkcd.com/927/

The best bet is a core containing the domain logic, with a UI shell on top. Then you swap the shell according to the platform.


I want my applications to look consistent across platforms. Why would I want Discord, for example, to look entirely different between macOS and Linux? With the current state of things, once I use the app anywhere, I'll know where everything is on any platform.

Take a good look around and check how often people really change computer platforms. And you already have so many things that are different that the "same look" is just an excuse. Gnome, KDE, macOS, and Windows do not have the same UX in their file explorers, which are a basic utility that everyone has to use. Same with connecting to WiFi and creating a new user account.

So why would you want Discord to be consistent, when you're mostly using the same desktop (or switching between at most two) for hours?

The thing is, when HIGs were followed instead of everyone trying to create their "brand", everyone knew where the common actions were. You learned the platform and could then use any app. With the new trend, you may only have one computer, but any new app is a new puzzle to figure out.


I don't really have any issues working out how to use modern Electron apps; they all follow very simple UX patterns, and I find them much easier to use than the average native wxWidgets/Qt app. Simple, consistent UI is less about the color scheme and border radius being consistent and more about things being simple and well laid out at a higher level.

Two apps can have different CSS while being easy to understand, because the core flows and ideas are the same. Meanwhile, many older native apps feel like junk-drawer UI, with crap thrown everywhere and weird app-specific quirks and patterns, even if it all does use native inputs and windows.


Electron apps use the usual web 2.0 forced keyboard focus antipattern which breaks page up and page down scrolling, so they are difficult to use. Also blurry text rendering.

It's not even that cross platform is necessarily bad; it's that we have so many cross-platform toolkits, and they compete with native ones.

I think we'd all be better off if we just declared Qt the standard GUI library and rid ourselves of the chaos we find ourselves in.


> The best bet is a core containing the domain logic, with a UI shell on top. Then you swap the shell according to the platform.

I've rarely seen that turn out very well. Typically it works OK on whatever desktop the main developers use, and not so much on the others. That means using multiple frameworks, each with their own idioms and quirks, and having to repeat a lot of work. Unless your UI is very simple, it is pretty expensive to maintain multiple separate versions of it.


The best way I've seen this implemented is having the domain be a library or a protocol/server. For a lot of SaaS, we already have people writing the mobile versions and the web version.
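
As a rough sketch of that split (all names here are hypothetical, not from any real project): the domain lives behind a small C API with no UI dependencies, and each platform supplies its own shell on top of it, here a trivial console one:

    #include <stdio.h>

    /* ---- core (domain) API: no UI dependencies at all ---- */
    typedef struct {
        char last_message[256];
    } chat_core;

    static void chat_core_send(chat_core *core, const char *text) {
        /* real domain logic (networking, persistence, ...) would live here */
        snprintf(core->last_message, sizeof core->last_message, "%s", text);
    }

    /* ---- one possible "shell": a console front end.
     * A GTK, Cocoa, or Win32 shell would call exactly the same core API,
     * just wiring it to native widgets instead of stdio. ---- */
    int main(void) {
        chat_core core = {0};
        chat_core_send(&core, "hello from the console shell");
        printf("core state: %s\n", core.last_message);
        return 0;
    }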

Yes. It’s more work than dumping Electron on users. Quality often is.


> don't block the first connection of the day from a given IP.

The bots come from a huge number of IP addresses, so that won't really help that much. And it doesn't solve the UX problem either, because most pages require multiple requests for additional assets, and requiring human verification for those is a lot more complicated than for the initial request.

> For the bots that perfectly match the fingerprint of an interactive browser

That requires properly fingerprinting the browser, which will almost certainly produce false positives for users who run unusual browsers or anti-fingerprinting measures.

> use hidden links to tarpits and zip bombs.

That can waste the bot operators' resources, but doesn't necessarily protect your site from bots. And it also requires quite a bit of work that small projects don't have time for.

Unless there is a prebuilt solution that is at least as easy to deploy and at least as effective as something like anubis, it isn't really practical for most sites.


> I assume then, that the only way a bot could even find my site is to do what the indexers do: brute force try every single possible ipv4 address hoping to hear something back, as my domain should not be known

If your site uses https, they could also get your domain from the certificate transparency logs for the certificate you use.
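
To illustrate how easy that discovery is, here is a sketch of querying crt.sh (one public CT log search frontend) with libcurl; the query URL format and the example domain are assumptions for illustration. Build with cc ct_lookup.c -lcurl -o ct_lookup:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* Ask crt.sh for every certificate logged for *.example.com; the
         * JSON response includes the hostnames on those certificates.
         * With no write callback set, libcurl prints the body to stdout. */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://crt.sh/?q=%25.example.com&output=json");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }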


I didn't think of that, but it makes complete sense, as it is https. I think my info was sold by my registrar as well, because solicitors call or email me on occasion saying they "accidentally came across my site" and want to provide design/JS/etc. help.

You can get around this by grabbing a wildcard certificate and then using a hard-to-guess subdomain.

You can have ads without tracking.

When IPv6 was first created, DHCP and NAT were new and not widely deployed. The designers weren't trying to "fix" them; the two solved the same problems independently.

And if you need NAT or DHCP, there isn't any reason you can't use them with IPv6. DHCPv6 has been around for a long time.


That's not at all true. DHCP was very much part of the operational canon of the internet at the time, which is why it persisted as a model. V6 really wanted to back that out so that networks 'just worked' without depending on an administrator to manage that local service.

NAT was already in use, and a substantial motivation for the IPv6 work was to provide an alternative before it got too entrenched, which sadly failed.


The RFC for DHCP was published in 1997, two years after the first RFC for IPv6, and three years after work on IPv6 started.

It was first published in 1993. I know it was in common use because I got into an argument with one of the authors, Greg Minshall, in 1995 about how basing it on BOOTP was really a useless idea, and I used it at my first job, which I left in 1992. I sat in on the v6 working group, and remember the discussion about what to do about it. Steve pretty much just drove the consensus as usual and no one had any real objections.

There isn't any reason you can't set up a NAT like that with IPv6.

And it added those 16 bits in a way that causes a lot of problems

> But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space

IPv6 supports that, but it ended up not getting used very much.

See https://en.wikipedia.org/wiki/List_of_IPv6_transition_mechan...
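
As a concrete sketch of that kind of deterministic embedding, this maps an IPv4 address into the NAT64 "well-known prefix" 64:ff9b::/96 (RFC 6052), one of the mechanisms on that list; it assumes a POSIX system with <arpa/inet.h>:

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *v4_text = "192.0.2.33";   /* example IPv4 address */
        struct in_addr v4;
        struct in6_addr v6;
        char out[INET6_ADDRSTRLEN];

        if (inet_pton(AF_INET, v4_text, &v4) != 1)
            return 1;

        /* 64:ff9b::/96 prefix, then the 32-bit IPv4 address in the low bits */
        memset(&v6, 0, sizeof v6);
        v6.s6_addr[0] = 0x00; v6.s6_addr[1] = 0x64;
        v6.s6_addr[2] = 0xff; v6.s6_addr[3] = 0x9b;
        memcpy(&v6.s6_addr[12], &v4.s_addr, 4);

        inet_ntop(AF_INET6, &v6, out, sizeof out);
        printf("%s -> %s\n", v4_text, out);   /* 192.0.2.33 -> 64:ff9b::c000:221 */
        return 0;
    }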


I remember reading about that a long time ago. I wonder why it never really caught on?

I think part of the problem is not so much a technical one as a coordination issue. Who are you most likely to need to get on board? ISPs and backbone providers. What is the path forward? You need a "here is the recommended path forward" kind of thing.



