When I first heard that Bun was written in Zig, I thought that was an odd choice for such a large project, mostly because the language is "unstable" and is still making significant breaking changes.
I would guess dealing with breaking changes is a big motivation for this.
Arcane build system. I mean, I guess it technically supports CMake these days, but I have never been able to get anyone else's Qt project to build without much gnashing of teeth.
Emulated native widgets try for pixel-perfect, but tend to feel wrong somehow.
> Gtk
Outside of a Linux/Gtk native environment, Gtk applications are awful. Take GIMP on macOS, for example: it's had window focus issues (the export dialog getting lost behind the main application window) literally ever since Gtk on macOS dropped the XQuartz dependency. And that's the flagship application for the toolkit.
So, your critique of Gtk sounds convincing, but about Qt, you seem to be admitting they're offering a less-horrible way to build than how things used to be.
I looked at this: https://doc.qt.io/qt-6/cmake-get-started.html
... and I'll admit they seem to be hiding some nasty stuff under the hood. But it still seems workable. I guess the devil is in the details?
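Skimming that page, the modern minimal setup boils down to something like this (a sketch based on the Qt 6 docs; the project and file names here are made up):

```cmake
cmake_minimum_required(VERSION 3.16)
project(helloqt LANGUAGES CXX)

# find_package locates Qt's installed CMake config files;
# the Widgets component transitively pulls in Core and Gui
find_package(Qt6 REQUIRED COMPONENTS Widgets)

# Sets project-wide defaults (AUTOMOC/AUTOUIC etc.) so moc/uic
# run automatically -- this is the "nasty stuff under the hood"
qt_standard_project_setup()

qt_add_executable(helloqt main.cpp)
target_link_libraries(helloqt PRIVATE Qt6::Widgets)
```

So the day-to-day usage is reasonably tame; the pain tends to show up in finding the right Qt installation for `find_package` to discover in the first place.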
That's one word that should never be used in a design meeting. None of the GUIs I've used has managed to do this right, not even Emacs or Firefox. The platforms are totally different (and in the case of Linux/Unix, there are a lot of competing HIGs). So trying to be cross-platform is a good illustration of the lesson in https://xkcd.com/927/
The best bet is a core containing the domain logic, wrapped in a UI shell. Then you swap the shell according to the platform.
I want my applications to look consistent across platforms. Why would I want Discord, for example, to look entirely different between macOS and Linux? With the current state of things, once I've used the app anywhere, I'll know where everything is on any platform.
Take a good look around and check how often people really do change computer platforms. And you already have so many things that are different that the "same look" is just an excuse. GNOME, KDE, macOS, and Windows don't have the same UX in their file explorers, which is a basic utility that everyone has to use. Same with connecting to Wi-Fi or creating a new user account.
So why would you want Discord to be consistent, when you're mostly using the same desktop (or switching between at most two) for hours?
The thing is, when HIGs were followed instead of everyone trying to create their own "brand", everyone knew where the common actions were. You learned the platform and could then use any app. With the new trend, you may only have one computer, but any new app is a new puzzle to figure out.
I don't really have any issues working out how to use modern Electron apps; they all follow very simple UX patterns, and I find them much easier to use than the average native wxWidgets/Qt app. Simple, consistent UI is less about the color scheme and border radius being consistent and more about things being simple and well laid out at a higher level.
Two apps can have different CSS while being easy to understand, because the core flows and ideas are the same. Meanwhile, many older native apps feel like junk-drawer UI with stuff thrown everywhere and weird app-specific quirks and patterns, even if it all does use native inputs and windows.
Electron apps use the usual web 2.0 forced keyboard focus antipattern which breaks page up and page down scrolling, so they are difficult to use. Also blurry text rendering.
> The best bet is a core containing the domain logic, wrapped in a UI shell. Then you swap the shell according to the platform.
I've rarely seen that turn out very well. Typically it works OK on whatever desktop the main developers use, and not so much on the others. It also means using multiple frameworks, each with their own idioms and quirks, and having to repeat a lot of work. Unless your UI is very simple, it is pretty expensive to maintain multiple separate versions of it.
The best way I’ve seen this implemented is having the domain be a library or a protocol/server. For a lot of SaaS, we already have people writing the mobile versions and the web version anyway.
Yes. It’s more work than dumping Electron on users. Quality often is.
> don't block the first connection of the day from a given IP.
The bots come from a huge number of IP addresses, that won't really help that much. And it doesn't solve the UX problem either, because most pages require multiple requests for additional assets, and requiring human verification then is a lot more complicated than for the initial request.
> For the bots that perfectly match the fingerprint of an interactive browser
That requires properly fingerprinting the browser, which will almost certainly have false positives from users who use unusual browsers or use anti-fingerprinting.
> use hidden links to tarpits and zip bombs.
That can waste the bot operators' resources, but it doesn't necessarily protect your site from bots. It also requires quite a bit of work that small projects don't have time for.
Unless there is a prebuilt solution that is at least as easy to deploy and at least as effective as something like Anubis, it isn't really practical for most sites.
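For what it's worth, the hidden-link idea itself is only a few lines of logic. A toy illustration (not a real server; the trap path and the ban-on-first-hit policy are made up for the sketch):

```python
# Honeypot sketch: a path that no human should ever visit (the link to
# it would be hidden via CSS and excluded in robots.txt), so any client
# requesting it is assumed to be a misbehaving crawler and gets banned.
TRAP_PATH = "/secret-archive/"  # hypothetical trap URL

banned_ips = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code for a (client IP, path) pair."""
    if ip in banned_ips:
        return 403
    if path == TRAP_PATH:
        banned_ips.add(ip)  # first hit on the trap bans the client
        return 403
    return 200

# A normal visitor never touches the trap:
print(handle_request("203.0.113.5", "/index.html"))   # 200
# A crawler that follows the hidden link gets banned...
print(handle_request("198.51.100.7", TRAP_PATH))      # 403
# ...and stays banned for subsequent requests:
print(handle_request("198.51.100.7", "/index.html"))  # 403
```

The hard part, as noted above, is everything around it: per-IP state doesn't help against distributed crawlers, and you still need to avoid banning legitimate users behind shared addresses.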
> I assume then, that the only way a bot could even find my site is to do what the indexers do: brute force try every single possible ipv4 address hoping to hear something back, as my domain should not be known
If your site uses https, they could also get your domain from the certificate transparency logs for the certificate you use.
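For example, crt.sh offers a public search over the CT logs. A minimal sketch of how a crawler might discover hostnames this way (the query URL is built but not fetched here, since that needs network access):

```python
from urllib.parse import urlencode

def ct_log_query_url(domain: str) -> str:
    # "%.example.com" matches the domain and all its logged subdomains;
    # fetching this URL returns JSON listing every certificate issued.
    return "https://crt.sh/?" + urlencode({"q": "%." + domain, "output": "json"})

print(ct_log_query_url("example.com"))
# -> https://crt.sh/?q=%25.example.com&output=json
```

Every certificate ever issued for a domain, including ones for "private" subdomains, is publicly logged and searchable this way, so an HTTPS site is never really hidden.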
I didn't think of that, but it makes complete sense, as it is HTTPS. I think my info was sold by my registrar as well, because solicitors call or email me on occasion saying they "accidentally came across my site" and want to provide design/JS/etc. help.
When IPv6 was first created, DHCP and NAT were new and not widely deployed. The designers weren't trying to "fix" them; those technologies solved the same problems independently.
And if you need NAT or DHCP, there isn't any reason you can't use them with IPv6. DHCPv6 has been around for a long time.
That's not at all true. DHCP was very much part of the operational canon of the internet at the time, which is why it persisted as a model. IPv6 really wanted to back that out so that networks "just worked" without depending on an administrator to manage that local service.
NAT was already in use, and a substantial motivation for the IPv6 work was to provide an alternative before it got too entrenched, which sadly failed.
It was first published in 1993. I know it was in common use because I got into an argument with one of the authors, Greg Minshall, in 1995 about how basing it on BOOTP was a really useless idea, and I used it at my first job, which I left in 1992. I sat in on the v6 working group and remember the discussion about what to do about it. Steve pretty much just drove the consensus as usual, and no one had any real objections.
I remember reading about that a long time ago. I wonder why it never really caught on?
I think part of the problem is not so much technical as a coordination issue. Who are you more likely to get on board? ISPs and backbone providers. What is the path forward? "Here is the recommended path forward", that kind of thing.