Summarizing or explaining pages directly, without copying to another app. Reading pages aloud. Maybe even orchestrating research sessions by searching and organizing...
The one-line "article" on lwn.net links to this email:
From: Kent Overstreet @ 2025-09-11 23:19 UTC
As many of you are no doubt aware, bcachefs is switching to shipping as
a DKMS module. Once the DKMS packages are in place very little should
change for end users, but we've got some work to do on the distribution
side of things to make sure things go smoothly.
Good news: ...
> Once the DKMS packages are in place very little should change for end users
Doesn't that mean I now have to enroll the MOK key on all my work workstations that use secure boot? If so, that's a huge PITA on over 200 machines. Just like with the NVIDIA driver, you can't automate the enrollment.
I have to constantly adjust my comfort level regarding what 'production' means. Consider the prep conditions, or 'prod', for your typical Chef or Butcher!
Anyway, fair question IMO. Another point I'd like to make: migrating away from this filesystem, disabling secure boot, or leaning into key enrollment would all be fine. Dealer's choice.
The 'forced interaction' for enrollment absolutely presents a hurdle. That said, this wouldn't be the first time I've used 'expect' to drive a management interface at scale. 200 machines is a good warm-up.
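For what it's worth, the `mokutil` half of the job scripts easily; a sketch of the idea (the key path, password, and prompt strings are assumptions, so check them against your mokutil version):

```shell
# Hypothetical sketch: stage the MOK enrollment request non-interactively.
# The reboot-time MokManager confirmation still needs a console (or
# out-of-band access), which is where 'expect' against a management
# interface comes in.
MOK_PASS='changeme'            # assumed one-time enrollment password
cat > enroll-mok.expect <<'EOF'
spawn mokutil --import /var/lib/dkms/mok.pub
expect "input password:"       { send "$env(MOK_PASS)\r" }
expect "input password again:" { send "$env(MOK_PASS)\r" }
expect eof
EOF
# On a real host: MOK_PASS="$MOK_PASS" expect enroll-mok.expect
```

The step this can't remove is the MokManager confirmation at next boot; at 200 machines that's where serial-over-LAN or IPMI consoles earn their keep.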
The easy way is to... opt out of secure boot. Get an exception if your compliance program demands it [and tell them about this module, too]. Don't forget your 'Business Continuity/Disaster Recovery' of... everything. Documents, scheduled procedures, tooling, whatever.
Again, though, stability is a fair question/point. Filesystems and storage are cursed. That would be my concern well before 'how do I scale', which, comparatively, is a dream.
I am not going to advocate to put bcachefs on 200 production machines.
However, I would like to push back on that article.
It says that bcachefs is "unstable" but provides no evidence to support that.
It says that Linus pushed back on it. Yes, but for process reasons rather than technical ones. Think about that for a second, though. Linus is brutal on technology, and I have never heard him criticize bcachefs technically except to say that case insensitivity is bad. That's kind of an endorsement.
Yes, there have been a lot of patches. It is certainly under heavy development. But people are not losing their data. Kent submitted a giant list of changes for the kernel 6.17 merge window (ironically totally on time). Linus never took them. We are all using the 6.16 version of bcachefs without those patches. I imagine stories of bcachefs data loss would get lots of press right now. Have you heard any?
There are very few stories of bcachefs data loss, and the ones I have heard of seem to end in recovery. A couple I have seen were mount failures (not data loss) and were resolved. It has been rock-solid for me.
> Eh? Linus has called it "experimental garbage that no one could be using" a whole bunch of times, based on absolutely nothing as far as I can tell.
Where did Linus call bcachefs "experimental garbage"? I've tried finding those comments before, but all I've been able to find are your comments stating that Linus said that.
Don't you only have to do that once per machine? After that the kernel should use the key you installed for every module that needs it. It is a pain in the ass for sure, but if you make it part of the deployment process it's manageable.
For sure it's a headache when you install some module on a whole bunch of headless boxes at once and then discover you need to roll a crash cart over to each and every one to get them booting again, but the secure boot guys would have it no other way.
Yes, that's why all the cool kids switched to tmux 17 years ago. The only argument the screen camp had was "no serial port support in tmux". To which we answered something about a smaller more modern code base...
If we're talking about someone who has received a binary copy of software, then isn't this obvious?
The MIT license permits a distributor to close the source of what they've redistributed, in original or modified form, potentially depriving end users of the freedom to view, modify, or distribute the source.
Permissive licenses prioritize the rights of software redistributors at the expense of end users.
The MIT license permits some other developer to fork the source and close it off, but as an end user of this particular software that is under MIT (meaning that source is available, and I can take it and modify it if I need to), how does that affect me?
tmux is MIT-licensed, right? The MIT license is very similar to the (2-clause) BSD license, which makes it one-way compatible with the GPL (you can incorporate MIT- or BSD-licensed code into GPL-licensed code, but not the reverse).
Edit: and to your point of a distributor withholding the source: yeah, so? If there ever came a point where the current maintainer closed its source (unlikely), somebody with a copy of it can step in with a fork. Or the project can die a deserved death for closing its source. At this point the benefits of open source are pretty much obvious to anyone with a brain, and closing the source of an open-source project is practically suicide.
I switched to tmux and then switched back because of the weird server/session/window/pane model, which makes no sense and prevents me from showing different windows or layouts on different clients. Four levels of objects is ridiculous, and when you end up with fewer capabilities than screen, what are we doing?
I would love to switch to a modern, maintained terminal multiplexer, but it would need to, well, be good at multiplexing.
A long time ago tmux was a little slower. But if it's super slow for you now, it's probably something in your config or setup.
I haven’t done serial port stuff in many years, so I ask this in ignorance and I give you permission to laugh at my naïveté. Can’t you just run tip or something in a tmux or zellij window and have basically the same thing?
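Pretty much, yes. A sketch of the idea (picocom, the baud rate, and the device path are all assumptions; cu, tip, or minicom slot in the same way):

```shell
# Wrap the serial program in a named tmux session so you can detach and
# re-attach, much like screen's native serial support. Written out as a
# small wrapper script here; the tmux invocation is the interesting part.
cat > serial-console.sh <<'EOF'
#!/bin/sh
# -A: attach if the session already exists, otherwise create it
exec tmux new-session -A -s console 'picocom -b 115200 /dev/ttyUSB0'
EOF
chmod +x serial-console.sh
```

What screen still has over this is opening the serial line itself (`screen /dev/ttyUSB0 115200`), with no second program involved.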
That's really, really expensive imo and you could do it for way less, but given their current revenue stream that's 80 years of development if they took in no more money ever!
Now, I don't know how many devs it would take to program a browser, but this one is already written, so it's not as hard as doing it from scratch. I reckon 20 good devs would give you something special.
Honestly, if someone said to me "Mick, here's $560M, put a team together and fork Firefox and Thunderbird. Pay yourself 250k and go for it"... I'd barely let them finish the sentence before signing a contract :)
It should be at least 100 devs at $250K each, which is still a severe underestimate. Note that there are many kinds of mandatory expenses that roughly match the direct compensation, so of $150K spent you can only pay out ~$75K, and you cannot attract senior browser devs at $75K annual compensation. That alone makes $25M a year, and the reality would be closer to $100M, which makes Mozilla's OPEX look more plausible.
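The arithmetic above, spelled out (all figures are this thread's assumptions, not Mozilla data):

```python
# Back-of-envelope: fully loaded cost per dev vs. what actually reaches
# the dev, using the ratio claimed above (overhead roughly equals comp).
devs = 100
fully_loaded = 250_000       # assumed annual cost to the org per dev
take_home_ratio = 0.5        # "with $150K you can only pay ~$75K"
direct_comp = int(fully_loaded * take_home_ratio)
annual_burn = devs * fully_loaded
print(direct_comp)    # 125000
print(annual_burn)    # 25000000
```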
$250k is a staggering salary... not everyone lives in San Francisco. Or America for that matter.
The guys I work with are on about £95k and the good ones are very good.
I have seen what small teams of good devs can do with the right environment, scope, tools etc. (oh, and being left alone by interfering management!)
I'm talking about a cut-down Firefox, stripped of all the bullshit in the background, just a browser that shows webpages... all the heavy lifting is done: CSS engine, JS engine etc.
> $250k is a staggering salary... not everyone lives in San Francisco. Or America for that matter.
Still, you need to spend at least $250K (of which direct compensation would be close to $150K) to hire a competent browser dev. And I'm not speaking about SF. Yes, you can get better cost efficiency outside American metros, but the reality is that experienced browser devs are rare outside those areas.
> I have seen what small teams of good devs can do with the right environment, scope, tools etc.
I'm not disputing that disruption can come from a small, focused team. But here we're talking about dealing with massive complexity, not an emerging market. You cannot "redefine" the problem: the ecosystem is a mess and we've got to live with it for a good while...
> I'm talking about a cut-down Firefox, stripped of all the bullshit in the background, just a browser that shows webpages... all the heavy lifting is done: CSS engine, JS engine etc.
You would be surprised how small the core engine parts are relative to the total code base. You may argue that most of the rest is unnecessary, and perhaps half of it is pretty much ad-hoc complexity, but the remainder has its own reasons to exist. New browser engine developers typically learn this the hard way, then decide to fall back to Chromium. I've seen it several times.
Firefox has way more than 20 developers. Looking at https://firefox-source-docs.mozilla.org/mots/index.html, if I'm not mistaken in my count, there are currently 147 module owners and peers alone. Some of those might be volunteers, but I think the large majority of them are Mozilla staff. On top of that there are probably a number of further Mozilla staff developers who aren't owners or peers, QA staff, product managers, sysadmins and other support staff…
I know they have way more than that but I'd argue that you don't need that many.
Hypothetically, if I was given the money and asked to build a team to fork Firefox I'd be more focused. Way more!
The current devs work on stuff I'd scrap like Pocket, telemetry, anything with AI, and so on. I bet there is a load of stuff in there that I'd want out! There's probably a bunch of things in Firefox Labs they're working on too.
So, I'd argue that 20 good devs (again, a number I pulled out of the air!) split into, say, 4 smaller teams could achieve a shit load of work under the right circumstances, with the right leadership and so on.
I'm currently a senior architect with over 50 devs below me. Most are mid-level at best (not a slur, just where they are in their career!) but the few good ones are very good. A team of 20 of those could pull it off!
Building a browser from scratch with 20 devs might be a tall order, but it's already built.
There's someone else right now who is going to important organizations they obviously don't understand, making wild claims about 'I could do it for much less', and cutting personnel drastically.
You severely underestimate the engineering cost of a modern web browser. Even assuming a fork that adds enough value, a team of 20 cannot even keep up with Chromium upstream. Good luck coming up with a new engine compatible with Chrome; Microsoft tried and eventually gave up.
> I don't think it's a generation thing, I think it's that what we generally consider normal has changed, but that some people got left behind in the old normal.
Isn't that the definition of a "generational thing"?
Now I have to think every time, is this someone I have to text first? Or do they consider texting then calling redundant?
Anyhow, I think both are important communication skills: adults should be able to handle direct verbal conversation and async written communication alike.
I take "generational" to mean different behavior patterns in different current generations. Of course, behaviors and norms can also change for most people over time.
Sophos was the latest scandal, though it's unclear to me to what degree their antivirus tools helped install the malware. Maybe it was just target selection from telemetry data. Maybe they used it to deploy the "kernel implant"?