Wanted to get some opinions from folks here that have actually built and shipped with Electron.
Background: Building an API IDE on Electron. Designing this to not be “just an API client” or a thin wrapper around a webapp. It’s a pretty original desktop tool with a lot of editor/IDE-like behavior: local workflows, richer interactions, and some things that I think would have been much harder to build and maintain in a more constrained setup. So yeah, that’s why we went for Electron.
this is the tool: github.com/voidenhq/voiden :)
Now, as adoption is growing, we are starting to get the usual questions about memory footprint and app size.
The (slightly) frustrating part is that when the app is actually being used, the app-side memory is often pretty reasonable. In many normal cases we are seeing something like 50–60 MB for the actual usage we care about (we even added a readout in the app itself so people can check it out).
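(For anyone curious how a readout like that can be built: here is a minimal sketch from the Electron main process using app.getAppMetrics(). It's just an illustration, not our actual implementation, and the helper name is made up.)

    const { app } = require('electron');

    // Hypothetical helper: sum the working set of every Chromium process
    // (browser, renderers, GPU, utility) and log a per-process breakdown.
    function reportMemoryFootprint() {
      const metrics = app.getAppMetrics(); // one ProcessMetric per process
      let totalMB = 0;
      for (const proc of metrics) {
        const mb = proc.memory.workingSetSize / 1024; // workingSetSize is reported in KB
        totalMB += mb;
        console.log(`${proc.type} (pid ${proc.pid}): ${mb.toFixed(1)} MB`);
      }
      console.log(`total working set: ${totalMB.toFixed(1)} MB`);
    }

    app.whenReady().then(reportMemoryFootprint);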
But then people open Activity Monitor, see all the Chromium/Electron-related processes, and the conversation immediately becomes:
“yeah but Tauri would use way less”
And then, without realizing it, I suddenly end up talking and philosophizing about Electron, instead of discussing the tool itself (which is what I am passionate about :)
Of course Electron also has overhead. Pretending otherwise would be foolish. So we are constantly optimizing what we can, and we will keep doing so…
At the same time, I do feel that a lot of these comparisons are weirdly flattened. For example, people often compare:
full Electron process footprint vs. the smallest possible Tauri/native mental model
…without always accounting for development speed, cross-platform consistency, ecosystem maturity, plugin/runtime complexity, UI flexibility, and the fact that some apps are doing much more than others. Which is, by the way, the reason we went with Electron.
So all this context to get to my real question, which is:
How do you explain this tradeoff to users in a way that feels honest and understandable, without sounding like you are making excuses for Electron?
And also, for those of you who have had this conversation a hundred times already:
What do you say when people reduce the whole discussion to “Electron bad, Tauri good”?
Have you found a good way to explain footprint in practical terms?
Mostly trying to learn how others think about this, especially those who have built more serious desktop products and had to answer these questions in the wild.
I think some of what is offensive about the Electron situation is that way too many Electron apps are things that live in the tray or try to hijack the application lifecycle. So they are not just burning up memory, they are burning it up for some trivial tray thing, while also making your machine slow to boot and complicating the tray UI.
Built and open-sourced a Postman alternative, and decided to do it on Electron.
What has been funny is indeed having to "defend" Electron when it comes to memory footprint, etc.
One thing I always thought was interesting is that people make the argument by comparing the full Electron process footprint vs. the smallest possible Tauri/native mental model, without thinking about all the advantages of Electron like development speed, cross-platform consistency, etc.
We have now optimized it a lot, and we even show the actual usage inside the app for folks to monitor.
With my team we built Voiden for some of these reasons, initially for our own internal use (building many APIs for our SaaS marketplace). Most of the folks on the team were Postman power users before, so we do remember the time when this was indeed something new.
The problem now is that most of the alternatives out there (including the ones you mentioned) do offer some great things, but essentially they feel like variations of the same concepts - so I see them as "enshittification on the way". The reason we built Voiden is that we wanted something that challenges these ideas. You can try it out and let me know if it resonates: https://github.com/VoidenHQ/voiden.
Apologies for the slight promo - but based on your comment I thought it might be relevant.
Today I released the community plugins for Voiden.
This is a big one because one of the things I don't want is for the API tool to become bloated with new features - so I want to allow anyone to build plugins to grow the tool.
Not sure if “code has always been expensive” is the right framing.
Typing out a few hundred lines of code was never the real bottleneck. What was expensive was everything around it: making it correct, making it maintainable (often underestimated), coordinating across teams and supporting it long term.
You can also overshoot: Testing every possible path, validating across every platform, or routing every change through layers of organizational approval can multiply costs quickly. At some point, process (not code) becomes the dominant expense.
What LLMs clearly reduce is the short-term cost of producing working code. That part is dramatically cheaper.
The long-term effect is less clear. If we generate more code, faster, does that reduce cost or just increase the surface area we need to maintain, test, secure, and reason about later?
Historically, most of software’s cost has lived in maintenance and coordination, not in keystrokes. It will take real longitudinal data to see whether LLMs meaningfully change that, or just shift where the cost shows up.
"What was expensive was everything around it" - when I say that code has always been expensive that's part of what I'm factoring in.
But even typing those first few hundred lines used to have a much more significant cost attached.
I just pasted 256 lines of JavaScript into the 2000s-era SLOCount tool (classic Perl, I have a WebAssembly hosted version here https://tools.simonwillison.net/sloccount) and it gave me a 2000s-era cost estimate of $6,461.
I wouldn't take that number with anything less than a giant fist of salt, but there you have it.
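For what it's worth, that number appears to come from SLOCCount's classic basic-COCOMO estimate. Here's a rough reconstruction, assuming the tool's old defaults (organic-mode COCOMO, a ~$56,286/year salary figure from around 2000, and a 2.4x overhead multiplier) - treat the constants as that assumption, not as anything current:

    // Rough reconstruction of SLOCCount's default cost estimate.
    // The constants below are the tool's old defaults, not current data.
    const sloc = 256;
    const kloc = sloc / 1000;

    const effortPersonMonths = 2.4 * Math.pow(kloc, 1.05); // basic COCOMO, organic mode
    const salaryPerYear = 56286;                            // assumed default annual salary (~2000)
    const overhead = 2.4;                                   // assumed default overhead multiplier

    const cost = (effortPersonMonths / 12) * salaryPerYear * overhead;
    console.log(Math.round(cost)); // ~6460, in the same ballpark as the $6,461 above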
> when I say that code has always been expensive that's part of what I'm factoring in.
Fair, but when an LLM writes code in response to a prompt I really don't get the sense that it's doing as much of that "everything around" part as you might expect.
No, that’s why you have a bunch of prompts that make artifacts before the prompt that writes code, and prompts afterwards that run tests on the code, etc. If you are just vibe coding with one prompt, it’s not going to work out very well.
It was a bit more than a domain name - they had 330 employees and $13.5 million in revenue for a quarter - but that acquisition was definitely peak dot-com boom.
I would love another bubble. I feel like tech has been in a corner for going on ten years now (the COVID spike was so brief). It's so concentrated in AI that it sucks up everything.
> The long-term effect is less clear. If we generate more code, faster, does that reduce cost or just increase the surface area we need to maintain, test, secure, and reason about later?
My take is that the focus is mostly oriented towards code, but in my experience everything around code got cheaper too. In my particular case, I do coding, I do DevOps, I do second level support, I do data analysis. Every single task I have to do is now seriously augmented by AI.
In my last performance review, my manager was actually surprised when I told him that I am now more a manager of my own work than actually doing the work.
This also means my productivity is now probably around 2.5x what it was a couple of years ago.
> In my last performance review, my manager was actually surprised when I told him that I am now more a manager of my own work than actually doing the work.
I think this is very telling. Unless you have a good manager who is paying attention, a lot of them are clueless and just see the hype of 10x-ing your developers, and don't care about the nuance of (as they say) all the surrounding bits to writing code. And unfortunately, they just repeat this to the people above them, who also read the hype and just see the $$ of reducing headcount. (Sorry, venting a little.)
This has been my experience, too. In dealing with hardware, I'm particularly pleased with how vision models are shaping up; they're able to identify what I've photographed, put it in a simple text list, and link me to appropriate datasheets. Yesterday, one even figured out how I wanted to reverse engineer a remote display board for a just-released inverter and correctly identified which pin of which unfamiliar Chinese chip was spitting out the serial data I was interested in; all I actually asked for was chip IDs, with a quick vague note on what I was doing. It doesn't help me solder faster, but it gets me to soldering faster.
A bit OT, but I would love to see some different methods of calculating economic productivity. After looking into how BLS calculates software productivity, I quit giving weight to the number altogether and it left me feeling a bit blue; they apply a deflator in part by considering the value of features (which they claim to be able to estimate by comparing feature sets and prices in a select basket of items of a category, applying coefficients based on differences); it'll likely never actually capture what's going on in AI unless Adobe decides to add a hundred new buttons "because it's so quick and easy to do." Their methodology requires ignoring FOSS (except for certain corporate own-account cases), too; if everyone switched from Microsoft365 to LibreOffice, US productivity as measured by BLS would crash.
BLS lays its methodology out in a FAQ page on "Hedonic Quality Adjustment"[1], which covers hardware rather than software, but software becomes more reliant on these "what does the consumer pay" guesses at value (what is the value of S-Video input on your TV? Significantly more than supporting picture-in-picture, at least in 2020).
2 things to think about here:
1. Coding is just one phase of the software development life cycle (SDLC). We still have to gather requirements, design, test, release, and most importantly maintain. I was taught, albeit years ago, that code spends most of its life in maintenance and that is the phase where the most money is spent.
2. Keep in mind Amdahl's law. The limit of software cost as coding cost approaches zero is the cost of the other phases of the SDLC. Apologies to Amdahl for the cheap, dirty bastardization.
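To make point 2 concrete with made-up numbers: if coding were, say, 20% of lifecycle cost, even free code generation could only ever shave off that 20%.

    // Toy illustration of the Amdahl-style argument above.
    // The 20/80 split is invented for the example, not a measured figure.
    const codingShare = 0.20; // fraction of lifecycle cost spent on coding
    const otherShare = 0.80;  // requirements, design, test, release, maintenance

    function relativeTotalCost(codingSpeedup) {
      // Only the coding share shrinks; everything else stays.
      return otherShare + codingShare / codingSpeedup;
    }

    console.log(relativeTotalCost(1));        // 1.00 - baseline
    console.log(relativeTotalCost(10));       // 0.82 - coding 10x cheaper
    console.log(relativeTotalCost(Infinity)); // 0.80 - coding free: the floor is everything else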
Honestly, if I could just get what the end user “really wants” (they often don’t even know), that would save a huge percentage of the overall cost, and that’s not a code problem, that’s human nature.
That is also getting cheaper: you can now quickly present them with a few working prototypes so they can make up their mind about what suits them best.
Another problem is that most users want different things; that's why you get these big bloated software suites. With LLMs it now also becomes more achievable to build custom software per user.
Code was expensive, is expensive, and will be expensive. The real cost is hidden. It takes a mature eye to see a codebase that works and is not a dumpster fire.
Correctness (doing what it's supposed to, nothing else), maintainability (accommodating unknown future changes), cost (deployment, refactoring, integrations), and performance (making the right tradeoffs) are not obvious and don't come naturally until you burn your fingers and learn to differentiate a good end result from a horrible one.
One of the creators here - you are right in the sense that most tools start open source and then go closed source. For us it was different, the opposite: we first made it as a product and then we open sourced it. It was never our intention to make it a SaaS, though, nor to charge for it.