That's fine, if we only care about ourselves. I guess the harder part is convincing everyone else to unplug from mass media and not raise their kids on it.
As someone who saw what impact WPF had on average users running average hardware in the late 2000s to early 2010s, I disagree.
In 2011, my brother was in seminary, using an average Windows Vista-era laptop that he had been given in 2008. When he was home for Christmas in 2011, we were talking about his laptop, and he told me that the Logos Bible software ran sluggishly on that laptop. He said something about how, for reasons unknown to him, the current version of Logos required advanced graphics capabilities (I forget exactly how he phrased it, but he had learned that the slowness had something to do with graphics). Bear in mind, this is software that basically just displays text, presumably with some editing for adding notes and such. At the time, I just bought him another laptop.
A few years later, I happened to read that Logos version 4 was built on WPF. Then, remembering my brother, I found this Logos forum thread:
This shows that Logos users were discussing the performance of Logos on machines with different graphics hardware. For a program that was all about displaying and editing text, it shouldn't have mattered. WPF had made a bet on then-advanced graphics hardware for reasonable performance, and that was bad for these users. And that's just the one example I know about.
"WPF had made a bet on then-advanced graphics hardware for reasonable performance, and that was bad for these users."
OTOH, WPF is today a surprisingly strong GUI platform if you just want to get your Windows GUI out there.
It runs really nicely even on low-end hardware. All the nice styling and blending techniques now _just work_ even on the cheapest low-end laptop.
The fact that it's over a decade old means all the LLMs actually know really well how to use it.
So you can just guide your LLM to follow Microsoft best practices on logic development and styling and "just add this button here, this button here, add this styling here" etc.
It's the least annoying GUI development experience I've ever had (as a dev, non-designer).
Of course it's not portable out of the box (Avalonia is the ticket there).
If you want 3D, you can just plug in OpenTK with OpenGL 3.3. Over a decade old, _but good enough for almost everything_ if you are not writing a high-perf game.
All in all, WPF plus OpenTK is a robust and non-surprising development platform that runs on anything from old laptops (e.g. a T14 Gen 2, per my testing) onwards.
I've been doing a side project using WPF and OpenTK - .NET works really great - here is a sample video of the whole stack (from adashape.com)
I had the misfortune of writing a complicated WPF app from scratch circa 2010-2011. Performance using the WPF widgets was terrible compared to HTML/JavaScript/Blink; we ended up throwing away most of the WPF code other than the main shell and a few dialogs, reimplementing the important stuff with immediate-mode Direct3D/Direct2D to get the necessary speed.
I recall wasting a lot of time staring at decompiled .NET bytecode trying to understand how to work around many problems with it, and it was clear from the decompiler output that WPF's architecture was awful...
It goes back pretty far. Nowadays the controversy is electron vs native (where most windows devs would consider WPF/.NET a native option).
But if you read books from the 2000s, there was much discussion about the performance overhead of a VM and garbage collected language; something like WinForms was considered the bloated lazy option.
I'm sure in a few years computers will catch up (IMO they did a while ago actually) and Electron will be normal and some new alternative will be the bloated option - maybe LLMs generating the UI on the fly à la the abomination Google was showing off recently?
FWIW Apple has made a similar transition recently from the relatively efficient AppKit/UIKit to the bloated dog that is SwiftUI.
My lived experience. Maybe bloated isn't the right word, but attention to performance just isn't there. Try using any SwiftUI app on iPhone or Mac. Try resizing a SwiftUI app window on Mac.
Yeah, it's not bloated, there are just a lot of surprising and weird performance holes, especially on macOS. Even on iOS there are dumb things like: if your List cell's outer view isn't a specific type, List won't optimize for cell reuse, and it will start dequeuing cells for every item in the List eagerly. Wrap your actual cell type with a VStack or something and it will work properly, only dequeuing visible cells. It can be really nice to work with, but man, some of the implicit behavior, performance or otherwise, is shocking.
I do not think current computers can catch up with Electron. When it is just one or two simple apps it is OK, but when everything is built with Electron (which is happening now), then it is not enough even with 32 GB+ of RAM.
I would argue that was less that WPF was the wrong life choice and more that Microsoft shouldn't have bent the knee to Intel's antitrust push to say their crap hardware was sufficient. [1]
Your argument presupposes that we should accept escalating baseline hardware requirements as good or even necessary, for a desktop computing world that was, from the user's perspective, doing pretty much the same thing as before. I reject that.
My recollection of current events at the time was that you were already having a dogshit experience using computers for many common things in the XP era with underpowered video hardware - trying anything complex in a browser, or Flash things with a lot of assets - so it was less forcing an escalating baseline and more recognizing the realities of what people were already expecting in a "good" computer, and building things that could take advantage of that.
I would agree it should have degraded much more gracefully and more readily than it did, but I'm quite confident we hadn't yet hit the point of diminishing returns on hardware improvements that would be necessary for such an argument.
Hell, I probably wouldn't make that argument until large amounts of RAM and VRAM (or unified RAM) are ubiquitous, because so many workloads degrade so badly with too little of either.
What does "GPU" mean here? Previous uses of the term seemed to imply "dedicated hardware for improving rendering performance" which the SVGA stuff would seem to fall squarely under.
The term GPU was first coined by Sony for the PlayStation with its 3D capabilities, and has been associated with 3D rendering since. In some products it stood for Geometry Processing Unit, again referring to 3D. Purely 2D graphics coprocessors generally don’t fall under what is considered a GPU.
It has been associated with 3D rendering, but given that things like the S3 86C911 are listed on the Wikipedia GPU page, saying "Accelerated GUIs don't need GPU" feels like attempting to win an argument by insisting on a term definition that is significantly divergent from standard vulgar usage [1], which doesn't provide any insight to the problem originally being discussed.
[1] Maybe I've just been blindly ignorant for 30 years, but as far as I could tell, 'GPU' seemed to emerge as a more Huffman-efficient encoding for the same thing we were calling a 'video card'
I don’t agree with what you state as the vulgar usage. “Graphics card” was the standard term for a long time, even after they generally carried a (3D) GPU. Maybe up to around 2010 or so? There was no time when 2D-only graphics cards were called GPUs, and you didn’t consciously buy a discrete GPU if you weren’t interested in (3D) games or similar applications.
In the context of the discussion, the point is that you don’t need high-powered graphics hardware to achieve a fast GUI for most types of applications that WPF would be used for. WPF being slow was due to architectural or implementation choices.
Most people consider GPU to mean "3D accelerator" though technically it refers to any coprocessor that can do work "for" the main system at the same time.
GPU-accelerated GUI usually refers to using the texture mapping capabilities of a 3D accelerator for "2D" GUI work.
Calling that "GPU acceleration" on Mac OS X was overstating things a bit. It supported rotations, compositing, and some other bulk operations, but text and precise 2D graphics were rendered on the CPU.
It _still_ is not trivial to render high-quality 2D graphics on the GPU.
A notable example I remember from around 2010 was when Evernote dropped WPF, supposedly due to blurry text issues but probably also performance (remember when we called it EverBloat?)
Can't find the original blog post about it, but here are a couple of mentions of it:
Blurry fonts were my main issue with WPF. I get headaches from blurry text, and the colour bleeding from ClearType just makes the headache worse.
Fortunately for me, I had mostly switched to Linux by that time, where it was then relatively easy to just enable greyscale AA with full hinting.
In recent years this has gotten worse again, with modern software incorrectly assuming everyone has a high-DPI monitor. My trick has been to use bitmap fonts with no AA, but that broke in recent versions of Electron, where bitmap fonts are now rendered blurry. So I had to stay on an old version of VS Code from last year, and I will be looking to switch to another editor (high time anyway, for other reasons).
WPF originally had two major rendering issues. One was the lack of pixel snapping support, and another was gamma correction issues during text rendering, particularly for light text on a dark background (due to an alpha correction approximation, IIRC). The two combined led to blurry text in WPF applications.
These were finally improved for WPF 4, since Visual Studio 2010 switched to it and had a near riot in the betas due to the poor rendering in the text editor.
Yes, text shaping and layout are complex. My point is that the program wasn't doing anything that should have required a GPU, particularly for the resolutions that were common back then.
The promise was that WPF would use hardware-accelerated libraries such as DirectWrite to put text on the screen even faster than GDI+ (using the CPU) could do. The reality turned out to be quite different: multiple layers of abstraction and just plain inefficient WPF code [1] meant that users needed powerful CPUs and GPUs just to get reasonable performance.
I had an Xperia for a while but kept my iPhone and then installed the iOS 26 beta on it. Expectation was that it would run but be pretty painful. Surprisingly, even the beta ran so fine that I sold the Xperia and switched back to iOS. And it's still my daily driver.
Not really. Ironically, WPF designers wanted to make things better by offloading the rendering onto the GPU. They also added DirectWrite that was supposed to provide high-quality text rendering with all the bells and whistles like layout for languages with complex scripts, in hopes of eventually offloading the text rendering as well.
But they just plain failed to execute well on this idea.
The fact that software to show the bible needs a GPU is funny in some kind of dystopian way. That is the kind of software that should work in 50k of memory.
Hey friend, check the user name of the person I'm responding to (and perhaps check out the people responsible for dtrace and larry ellison lawnmower comparisons). I might appear more coherent afterwards.
I hope we can still get to a point where wasm modules can directly access the web platform APIs and get JS out of the picture entirely. After all, those APIs themselves are implemented in C++ (and maybe some Rust now).
Have we really reached the limit of how much we can reliably automate these things via good old metaprogramming and/or generator scripts, without resorting to using unreliable and expensive statistical models via imprecise natural language?
> Refusing to use AI out of principle is as irrational as adopting it out of hype.
I'm not sure about this. For some people, holding consistently to a principle may be as satisfying, or even necessary, as the dopamine hit of creation mentioned in the article.
I spend a lot of time side by side with other devs, watching them code and providing guidance. A trend I'm starting to sense is that developer velocity is hindered just as much by unfamiliarity with their tools as by wrestling with the core problem they really want to solve.
When to use your mouse, when to use your keyboard, how to locate a file you want to look at in your terminal or IDE, how to find commands you executed last week, etc. It's all lacking. When devs struggle with these fundamentals, I suspect the desire to bypass all this with a singular "just ask the LLM" interface increases.
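To make "how to find commands you executed last week" concrete: interactively that's Ctrl-R incremental history search, and non-interactively it's just grepping your history file. A self-contained sketch (the temp file and commands here are made up so the example runs anywhere):

```shell
# Simulating a history search: grep is the non-interactive cousin of Ctrl-R.
# Using a scratch file instead of the real ~/.bash_history so this is self-contained.
printf 'git status\ndocker build -t app .\ngit push\n' > /tmp/demo_history
grep 'docker' /tmp/demo_history
```

Against a real shell history the same pattern applies, plus tools like `history | grep` or fzf for fuzzy search.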
So when orgs push "devs should use LLMs more to accelerate", I really wish the focus were more "find ways to accelerate", which could more reliably mean "get more proficient with your tools".
I think there's a lot of good that can be gained from formalizing conventions with templating engines (another tool worth learning), rather than relying on stochastic template generation.
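As a toy illustration of what I mean by formalizing a convention with a template rather than stochastic generation (the handler convention and names here are made up), Python's stdlib `string.Template` is enough:

```python
from string import Template

# A hypothetical team convention: every service exposes a health-check handler
# with the same shape. Encode the convention once, as a template.
HANDLER_TEMPLATE = Template(
    'def ${name}_health() -> dict:\n'
    '    return {"service": "${name}", "status": "ok"}\n'
)

def generate_handlers(services: list[str]) -> str:
    """Deterministically emit one handler per service: same input, same output,
    every time - unlike a statistical text generator."""
    return "\n".join(HANDLER_TEMPLATE.substitute(name=s) for s in services)
```

The point isn't the template engine itself; it's that the output is reviewable once and then trustworthy forever.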
Every little detail matters though. In SQL, do you want your database field to have limited length? If so, pay attention to validation, including cases where the field's content is built up in some other way than just entering text in a free-form text field (e.g. stuffing JSON into a database field). If not, make sure you don't use some generic "string" field type provided by your database abstraction layer that has an implicit limited length. Want to guess why that scenario's on my mind? Yeah, I neglected to pay attention to that detail, and an LLM might too. In CSS, little details affect the accessibility of the UI.
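A minimal sketch of the kind of check that catches the JSON-stuffing case before the database rejects or silently truncates it (the column limit and field name are hypothetical):

```python
import json

NOTES_MAX_LEN = 255  # hypothetical VARCHAR(255) limit on a "notes" column

def validate_notes_field(payload: dict) -> str:
    """Serialize payload to JSON and verify it fits the column limit.

    A free-form text field is easy to length-check in the UI; the JSON-stuffed
    variant is the one that slips past validation, because its length depends
    on the whole structure, not what any one user typed.
    """
    serialized = json.dumps(payload)
    if len(serialized) > NOTES_MAX_LEN:
        raise ValueError(
            f"notes field is {len(serialized)} chars, exceeds {NOTES_MAX_LEN}"
        )
    return serialized
```

An LLM will happily generate the happy path here and skip the length check, which is exactly the class of detail I mean.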
So we need to pay attention to every detail that doesn't have a single obviously correct answer, and keep the volume of code we're producing to a manageable enough level that we actually can pay attention to those details. In cases where one really is just literally moving data from here to there, then we should use reliable, deterministic code generation on top of a robust abstraction, e.g. Rust's serde, to take care of that gruntwork. Where that's not possible, there are details that need our attention. We shouldn't use unreliable statistical text generators to try to push past those details.
> So we need to pay attention to every detail that doesn't have a single obviously correct answer
I really, really wish that were the case. But look at the modern web. Look at iOS apps. Look at how long discord takes to launch on a modern computer. Look how big and slow everything is. Most end user applications released today do not pay attention to those small details. Definitely not in early versions of the software. And they're still successful. At least, successful enough.
I'd love a return to the "good old days" where we count bytes and make tight, fast software with tiny binaries that can perform well even on 20 year old computers. But I've been outvoted. There aren't enough skilled programmers who care about this stuff. So instead our super fast computers from the future run buggy junk.
Does Claude even make worse choices than many of the engineers at these companies? I've worked with several junior engineers who I'd trust a lot less with small details than I trust Claude. And that's Claude in 2026. What about Claude in 2031, or 2036? It's not that far away. Claude is getting better at software much faster than I am.
I don't think the modern software development world will make the sort of software that you and I would like to use. Who knows. Maybe LLMs will be what changes that.
> But look at the modern web. Look at iOS apps. Look at how long discord takes to launch on a modern computer. Look how big and slow everything is. Most end user applications released today do not pay attention to those small details. Definitely not in early versions of the software. And they're still successful. At least, successful enough.
The main issue is that we have a lot of good tech that is used incorrectly. Each component is sound, but the whole is complex and ungainly. They are code chimeras. Kinda like using a whole web browser to build a code editor, or using React as the view layer for a TUI, or adding a dependency just to check if a file is executable.
It's like the recently posted project: a Lisp where every function call spawns a Docker container.
I disagree on Kubernetes versus ECS. For me, the reasons to use ECS are not having to pay for a control plane, and not having to keep up with the Kubernetes upgrade treadmill.
To replace Kubernetes, you inevitably have to reinvent Kubernetes. By the time you build in canaries, blue/green deployments, and rolling updates with precise availability controls, you've just built a bespoke version of k8s. I'll take the industry standard over a homegrown orchestration tool any day.
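For a sense of what "precise availability controls" means concretely, here's a sketch of the rolling-update knobs a Deployment gives you out of the box (the name, replica counts, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # never drop below 9 ready pods during a rollout
      maxSurge: 2           # allow up to 12 pods while rolling
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.2.3   # illustrative image
```

Rebuilding those two `rollingUpdate` guarantees by hand, on top of a plain instance group, is where the homegrown orchestration cost starts to show.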
It used to be Google Deployment Manager, but that's dead soon, so Terraform.
To roll back you tell GCE to use the previous image. It does all the rolling over for you.
Our deployment process looks like this:
- Jenkins: build the code to debian packages hosted on JFrog
- Jenkins: build a machine image with ansible and packer
- Jenkins: deploy the new image either to test or prod.
Test deployments create a new Instance Group that isn't automatically attached to any load balancer. You do that manually once you've confirmed everything has started ok.
The number of tools and systems here that work because of k8s is significant. K8s is a control plane and an integration plane.
I wish luck to the (IMO) fools chasing the "you may not need it" logic. The vacuum that attitude creates in its wake demands many, many complex and gnarly home-cooked solutions.
Can you? Sure, absolutely! But you are doing that on your own, gluing it all together every step of the way. There's no other glue layer anywhere remotely as integrative, that can universally bind to so much. The value is astronomical, IMHO.
> the actual object-level question ("is this tool useful for this task")
That's not the only question worth asking though. It could be that the tool is useful, but has high negative externalities. In that case, the question "what kind of person uses/rejects this" is also worth considering. I think that if generative AI does have high negative externalities, then I'd like to be the kind of person that rejects it.