It has nothing to do with NTFS, but everything to do with the Win32 API. The Windows kernel supports this file model, as proven by WSL1. There is a blog post somewhere (The Old New Thing?) stating that the engineers would like to, e.g., allow deleting a file even if a program still holds a handle to it, but are concerned that deviating from current behavior would cause more problems than it solves.
The reason that they want a reboot is that they do not want to support a system using two versions of the same library at the same time, let's say ntdll. So they would have to close any program using that library before programs that use the new version can be started. That is equivalent to a reboot.
And I completely understand the reason. For a long time, when Firefox updated on Linux, the browser windows still open would break: the processes running the non-updated Firefox opened resources meant for the updated Firefox. The Chrome developers mentioned [2] that the "proper" solution would be to open every file at start and pass the file descriptors to the subprocesses, so all of them use the same version of each file. Needless to say, resource usage would go up.
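The mechanism the Chrome developers rely on can be shown in a few lines. This is a minimal sketch assuming POSIX semantics (Linux/macOS): a file descriptor opened before an "update" keeps seeing the old contents even after the file is atomically replaced on disk, because the descriptor pins the old inode. The file names here are purely illustrative.

```python
import os
import tempfile

def fd_survives_replacement():
    """Demonstrate that an already-open fd still reads the pre-update file."""
    dirpath = tempfile.mkdtemp()
    path = os.path.join(dirpath, "lib.so")

    with open(path, "wb") as f:
        f.write(b"version 1")

    # Take a handle before the "update", like a long-running process would.
    fd = os.open(path, os.O_RDONLY)

    # Atomically replace the file, like a package updater does.
    with open(path + ".new", "wb") as f:
        f.write(b"version 2")
    os.replace(path + ".new", path)

    old = os.read(fd, 64)          # via the old fd: still the old contents
    os.close(fd)
    with open(path, "rb") as f:    # via the path: the new contents
        new = f.read()
    return old, new
```

A subprocess inheriting `fd` would therefore keep loading "version 1" no matter how many times the file on disk is swapped out underneath it, which is exactly why passing descriptors around keeps every process on one consistent version.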
> This ticket is rather long and has a lot of irrelevant content regarding this new topic. If I need to bring in a colleague I do not want them to have to wade through all the irrelevant context. If you would like, please open a new issue with regards to how we support middlebox compatibility.
The author turns this into:
> The GitHub issue comment left at the end leads me to believe that they aren't really interested in RFC compliance. There isn't a middleground here or a "different way" of implementing middlebox compatibility. It's either RFC compliant or not. And they're not.
This is a bad-faith interpretation of the maintainer's response. They only asked to open a new, more specific issue report. The maintainer always answered within minutes, which I find quite impressive (even after the author ghosted for months). The author consumed the maintainer's time and shouldn't get the blame for the author's problems.
I don't know, I don't think it's really a huge waste of time considering I just read the entire comment thread in a handful of minutes. And beyond that, failing to comply with RFC requirements is the bug here -- a workaround existing for a specific language isn't a fix.
Again: the maintainer does not say there is no bug. He says: please open a new issue, with a proper title and description for the actual underlying problem. Is that seriously too much to ask? Instead, the guy writes a whole blog post shitting on the project. Does anyone still wonder why people burn out on maintaining FOSS projects?
For both of them! Since both of them are aware now, either one could open that ticket. If the maintainer has very specific ideas about how a ticket should look, maybe they can do that themselves quickly, now that they are aware of not complying with the RFC. Then the ticket will perfectly match their expectations.
The maintainer is usually also the one who has to trace the root cause, which in this case the issue reporter did, which is certainly more work than creating an issue according to the formatting and other requirements the maintainer may have. So in that light, the reporter of the issue already did a big chunk of work for the maintainer or the project. I wouldn't really call them acting "entitled" after that. Clearly they put in effort more than could be expected already.
Exactly, that's all his PR had to be. The history of finding the issue could be an interesting story (I bet it involves Elixir!), but in places it reads as almost malicious. If I received a PR anything like that on something I maintained, it would be received very poorly. The author comes off as overly aggressive toward the maintainers and far too sensitive to their response.
It's pretty standard to open a new issue and reference the previous issue for context, while keeping the new issue specific about what needs to be addressed - ie. RFC compliance.
I don't see the problem here at all - it was a reasonable request and it would have taken `feld` all of 2 minutes to do. Certainly less time than writing that blog post.
It's not entirely WolfSSL's fault. TLS 1.3 is a mass of kludges and hacks to deal with the fact that they created a new protocol that's nothing like TLS 1.0-1.2 but dressed it up to look like TLS 1.2. It even lies about its protocol version in the handshake, hiding the real version in one of the many extensions they had to invent to kludge it into working. And in terms of RFC compliance, one of the most widely-used implementations isn't compliant: it doesn't send any of the mandatory-to-implement cipher suites in its client hello, which means that unless you want to trigger a rehandshake on every single connect, you have to implement their non-compliant form of TLS 1.3.
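The "lying about its version" part is concrete in RFC 8446: a TLS 1.3 ClientHello carries `legacy_version = 0x0303` (the TLS 1.2 code point), while the real version, 0x0304, travels in the `supported_versions` extension (extension type 43). A small sketch of how that extension's bytes are laid out (the function name is mine, not from any library):

```python
import struct

LEGACY_VERSION = b"\x03\x03"  # what the ClientHello header claims: "TLS 1.2"
TLS13 = b"\x03\x04"           # the actual version, hidden in an extension

def supported_versions_ext(versions):
    """Build a ClientHello supported_versions extension (RFC 8446, type 43)."""
    version_list = b"".join(versions)
    # body = one length byte for the version list, then the list itself
    body = struct.pack("!B", len(version_list)) + version_list
    # extension_type (43) + 2-byte extension_data length + body
    return struct.pack("!HH", 43, len(body)) + body

ext = supported_versions_ext([TLS13])
```

So a middlebox that only parses the outer record and handshake headers sees what looks like a TLS 1.2 connection; only software that parses extension 43 learns the truth.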
The real problem though is that they made a protocol that really, really wants to pretend it's TLS 1.2 when it really isn't anything like TLS 1.2. I wouldn't blame "middleboxes" for getting confused when they encounter that.
The problem is there are many middleboxes that monitor port 443 and will drop any traffic that they can't decode as TLS (which in this case means TLS 1.2 or below). The choice was between masking traffic as an earlier version of TLS or forcing the replacement of all of those middleboxes. It's a no-brainer.
Then don't put it on 443 and pretend (badly) that it's TLS 1.2. Given that QUIC also uses 443 (and 80) without too many problems and that doesn't look anything remotely like TLS, presumably non-TLS 1.2 traffic to 443 is OK.
The problem isn't really the port used, it's the uncanny-valley approach they took in creating something that looks like a creepy zombie version of TLS 1.2, which keep-suspicious-things-out appliances quite rightly get suspicious over.
But QUIC doesn’t use 443/TCP; it uses 443/UDP. So it’s unsurprising that middleboxes that care about 443/TCP would ignore it. That doesn’t support your claim that “non-TLS 1.2 traffic to 443 is OK.”
The point I was trying to make, probably badly, was that there was no need to make TLS 1.3 pretend to be TLS 1.2 going to TCP/443. They could have picked some new port, called it TLS 2.0 (which is what it actually is), and run with that. If QUIC can pick its own port and, by the looks of it, not run into massive problems, there's no reason why TLS 2.0 can't do so too.
> wants to pretend it's TLS 1.2 when it really isn't anything like TLS 1.2.
I've seen a ton of this recently, now that Amazon offers TLS 1.3 with post-quantum key exchange on CloudFront. A whole ton of different middleboxes shit themselves.
A reasonable reply indeed from the maintainer. This happens a lot: you think together in an issue and only identify what's really wrong near the end. Only then can you articulate the issue in a helpful, concise way. Perhaps GH could add a feature to facilitate this pattern.
The author has spent a lot of time on this as well. I can see both sides. From the author's perspective their focus is their product/system. Any extra time they spend is not contributing to that. They've already spent a fair amount of time helping root cause the issue and from their perspective once it's clear what the issue is they're done. The author also seems to work on open source. In this case they are the customer of a product, granted an open source one, and they've helped the vendor (the maintainer) figure out something is broken. Their expectation is that the vendor takes things on from there and doesn't put up some bureaucracy.
That said, of course you paid nothing for this and should expect nothing, but the OSS project also has no expectation that its customers will support it if those customers aren't getting their expectations met. In today's world one unhappy customer can give you a pretty bad rep, as is happening here. Now, if you don't care, then you don't care. But the argument that because your product is "free" your customers have no voice doesn't sound that great either.
Everyone seems to be pointing out how the author disappeared and came back much later. Well, they disappeared because it wasn't a problem anymore, or they'd worked around it, and came back when they hit the problem again. Just as the maintainer doesn't work for the author, the author doesn't work for the maintainer either.
It's also true that the ticket now has a lot of history, but the original bug is still the same bug; it's just that now it has been root-caused. The maintainer's response of "now that you've found a setting that works around the issue, you're good and we can close this" is also a bit off. And sure, they don't work for anyone, so they're welcome to do whatever they want.
As isn't uncommon when two humans communicate online there is some miscommunication here. But you can argue either way. Not being an open source maintainer I don't know what the "protocol" here is but the few times I've filed bugs against an open source product I did personally put in the extra mile to make them actionable. But in my day job I have to deal with all sorts of bug reports and chasing them down to a resolution is part of what makes the product I work on a better one. And yes, I get paid to do that ;)
What is missing there is a support team, and maybe a separation between a customer (user) facing support system and a bug tracker.
Support guides the user through the discovery process, which can be messy and go in circles, and the result of that is a bug report which is actionable by a developer.
> The maintainer should just open a new issue for RFC compliance himself since that's a pretty big issue and he obviously thinks OP spams too much.
Reading the issue tracker, why would he do that unless he could repro?
> Hi @feld , I can't really tell if this is related to the ticket that you pointed out. I'll be helping you with this issue as well as looking into the other ticket. Can you give me step by step instructions on how to reproduce what you are seeing? Please note that I have limitted experience with HAProxy and Erlang.
> ...
> I've successfully connected to the server with the examples/client/client and I cannot reproduce what you are seeing. I've built with both WOLFSSL_TLS13_MIDDLEBOX_COMPAT defined and undefined.
He only gets a reply six months later!
This, I feel, clearly shows feld's intentions: he wasn't interested in getting it fixed (it was not a bug for him), but in spreading the word about it. To me, anyway, it looks like feld is more interested in writing outrage bait than in getting a working product.
I've used WolfSSL in anger and the experience was much better than OpenSSL and AWS-lc.
Looking at the ticket itself, I consider the responses from the dev team to be pretty good support - better than some paid products I have used.
If the maintainer just opens the concise bug report they want ("RFC ..., Section ...: if TLS 1.3 is negotiated and the client sends a session id, the server must send a ChangeCipherSpec"), they have what they want and can move on with their life.
However, if the maintainer can get the reporter to do it, the reporter has become a better reporter and the world has become a better place.
IMHO, the original bug report was pretty out there. Asking a library developer to debug a client they don't use against a server they didn't write is pretty demanding. I know OpenSSL has a minimal server; I expect wolfSSL does too? That would be easier to debug against.
Actually, on re-reading the original report, the reporter links to a discussion where they have all the RFC references. Had the reporter summarized that to begin with, rather than suggesting a whole lot of other stuff (like a different wolfssl issue that has to be completely unrelated), I think the issue would have gone better.
I will further add that putting a MUST in an appendix seems like poor editing. It should have been noted in section 4.1.2 and/or 4.1.3 that a non-empty legacy_session_id means the server MUST send a ChangeCipherSpec. It's not totally obvious, but if the client requests middlebox compatibility, the RFC says the server MUST honor it. If the client doesn't request it by sending a legacy session id, the server can still send a superfluous ChangeCipherSpec message if it wants, although I don't know whether that helps without the session id.
Out of interest, how is that relevant? Are we not able to criticize a FOSS maintainer's response unless we run a project of scale ourselves? The maintainer is clearly engaging and knows what the problem is but stalls on the "last mile", which is issue creation. Do you agree?
wolfSSL also sells commercial licenses so it's not like they're going uncompensated for their work. Regardless, we shouldn't put people on pedestals because their title is "FOSS maintainer"
You know a social movement has gone full circle when a criticism so scathing that you couldn't have come up with it and made it trend before, even if you gave it your all, is now a motto and a point of pride for those who follow it.
This is happening at the same time as hundreds of millions of regular-variety consumers are being fed daily propaganda about how it's "finally time to switch to Linux" because it's so much better for them, the individual. If only they knew it's apparently not actually about them, never has been, and never will be.
When exactly is 'before'? Before GitHub existed to put your code and its issues front and center? Before a rich GitHub profile became an expectation when you're considered for a job?
Of course I wouldn't have been able to come up with this statement back then, because the perverted view that OSS devs owe free work to the users of their software was not so pervasive.
On your edit: it's a bit rich calling the push to switch to Linux "propaganda", especially given the UX downturn of Windows and macOS. Also, why just hundreds of millions? Go for hundreds of billions if you're just going to pull out numbers. Apart from that: even if Linux is not about the users, in many cases it is better for them as-is. Funny how that works, with no conflict.
> Also why just hundreds of millions.. Go for hundreds of billions if you're just going to pull out numbers
You see, that would be because I did not just pull out an arbitrary number. "How many Windows users there are" is a reported fact you can just search for, and even the total is not "billions" (plural). I know, I was surprised too. From the horse's mouth: https://blogs.windows.com/windowsexperience/2025/06/24/stay-...
My first comment on this site pointing out that a FOSS user sounds entitled is from 2021. I've been saying it outside the site for 10+ years, spanning back to the time when it wasn't cringe to have a GitHub sticker on your laptop.
I maintain several FOSS projects, though none as popular as wolfSSL, and if I want a new, cleaner issue, I usually open it myself: then I can write it the way I want and include the information, and only the information, that I think is important. If I ask someone else to do it, there's a pretty good chance they won't write it the way I'd like, if they write it up at all.
That's actually impossible to answer. I maintain or contribute to or have contributed to several FOSS projects whose number varies depending on how you want to count them, and neither myself nor anyone else who contributes to any FOSS project has the faintest idea how many people use them, especially if they're included in widely-used distros where the number is anything from zero to $number_of_distro_users.
Presumably, the maintainer wants the best for the product and its users. So they have a definite interest in documenting a todo list.
Presumably, the user wants the best for the product and their ability to use the product. So they have a definite interest in documenting a todo list.
It doesn't make sense for the two to be at war with each other. It is no big deal for the maintainer to ask a favor. It's not too big of a deal for the user to decline. There's no need to attack.
I have often dropped a note to the maintainer of a project I bumped into. I'm sure they would prefer a bug report in their official forge. But I don't really use their software except for this one time. I'm not willing to jump through the hoops to create an account in yet another SaaS just to file this one report. Just dropping them an email was a courtesy. But often they don't interpret it that way. I'm perfectly un-insulted if they just delete my note and never "fix" the issue because it didn't come through proper channels.
No attacks. No war. Just well wishes. But I might very likely avoid the product if I'm ever back in those woods. Not out of anger or retribution. Just because I'll remember that the product had at least one sharp edge for my use case and the maintainer was a bit overwhelmed by the weight of supporting my niche use case. That doesn't make the maintainer a bad person or even a bad maintainer.
If the maintainer is trying to write something RFC-compliant, and someone reports a violation of the RFC, it sure seems reasonable for the maintainer to want to track that.
If they don't want to, that's certainly their right, but it also tells us something about that project.
Someone reporting an RFC violation doesn't automatically mean there is actually an RFC violation. That's why they are asking for a minimal repro, not a dump of the reporter's stream of consciousness. If a teammate at work came to you and dumped something like this on your desk, how would you react?
If they're doing this and bothering to interact with tickets at all, presumably they've willingly taken on a duty to the software's quality and all that that entails.
Maintaining an open-source software project is frequently a hobby that’s performed out of a labor of love. There’s no duty owed to anyone, nor should one be implied by past behavior. The open-source community is not a slave trade.
I don't see how that changes anything (and you didn't say "voluntarily"). Volunteering does not create a duty. One can volunteer to pick up litter and give up halfway through; the only consequence would be disappointment.
Volunteering to maintain a project does entail accepting duties; that's what taking on the role of maintainer means. They are of course free to give up that role at any time, but those duties exist while the role is held.
Let's suppose for the sake of the argument that while you are a volunteer, you take on some duty. What is the nature of that duty? And how do we enforce the execution of said duty? What are the consequences of it not being performed?
You can't really say someone has a "duty" without also implying that they have a "responsibility," and thus liability if they fail to execute those duties properly. I don't see how this fits at all for a volunteer. Very few people are going to volunteer for no pay if they're taking on a risk of liability.
Maybe you mean a civic duty? That would make somewhat more sense, but the problem is that there’s no objective standard against which to test performance. It’s completely subjective and will be forever argued—much like this thread. :-)
The blog-poster wasn't happy with the issue being closed, so somehow I doubt that opening a new issue and referencing this one would've yielded a different result from what we got now.
I was reading through the complete issue thread and I have to say I probably would side with the wolfSSL maintainers in part but they could have handled it in a nicer way.
"Anthu" only responded with this after "feld" asked why the issue was closed by them, and only then the response you mentioned was written.
"Anthu" could have simply asked before closing the issue and the reporter would have been fine. Like, say "So, this issue meanwhile evolved into RFC compliance and got a bit off track in my opinion. Can you please open up a separate issue for this so we can get this fixed in a more focused manner? That would be very helpful for our workflow. If not, I would open up an issue and reference this one if that's okay with you."
My point is that feld felt a little ignored in their problem, and the support role could have handled it a little nicer. I get that maintainer time is limited, but I would probably recommend an issue template for these matters where there's checkboxes in them like "keep it short, keep it reproducible" and maybe a separate issue template and tag for RFC matters.
On the other hand, "feld"'s blog post reaction was also quite trigger-happy and partly in bad faith. They could have communicated the same things in a "non rage mode" after things had calmed down a bit.
The OP's blog post also reeks of a similar style to the hit piece.
Given the large delay between the initial report and further responses by the user `feld`, I wonder if an OpenClaw agent was given free rein to try to clear up outstanding issues in some project, including handling the communication with the project maintainers?
Worse yet, despite publishing seventeen blog posts between filing the issue and finally responding to it, he has the gall to open with "Sorry I missed your replies (life gets busy)".
Nice. But 5 years seems unrealistic. Who stays in the same job using the same processes for 5 years these days? Even if the task remains the same, input formats might change, requiring extra maintenance of the tool. I should recalculate that for 3 years before using it in my automation decisions.
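The recalculation is simple enough to sketch. This is a hedged back-of-the-envelope helper (all names and numbers are illustrative, not from the original tool): net hours saved over a horizon, minus the cost of building and maintaining the automation.

```python
def automation_payoff(minutes_saved_per_run: float,
                      runs_per_week: float,
                      horizon_years: float,
                      build_cost_hours: float,
                      yearly_maintenance_hours: float = 0.0) -> float:
    """Net hours saved by automating; positive means it pays off."""
    weeks = horizon_years * 52
    saved_hours = minutes_saved_per_run * runs_per_week * weeks / 60
    cost_hours = build_cost_hours + yearly_maintenance_hours * horizon_years
    return saved_hours - cost_hours
```

For example, a task saving 5 minutes, run 10 times a week, over a 3-year horizon with 40 hours of build cost nets 90 hours; shrink the horizon or add yearly maintenance for changing input formats and the payoff erodes quickly.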
It doesn't matter whether OSS is American (in whatever sense) -- anything that is America-specific (e.g. server addresses) can be patched for a localized European version. The different commercial model does matter: American law does not apply (Cloud Act, National Security Letters, ...)
Live tiles are nearly universally praised in retrospect, but it might be a case of hindsight bias [1]. The video [2] brings up some problems of the concept and why no other company copied the concept.
I think if Microsoft had made an easier bridge, faster from Win32 to things like Live Tiles (and the Charms, too) there would have been a lot more people praising the Live Tiles today (and maybe even the Charms). Live Tiles really made their case on Windows Phone 8 where nearly every app supported them (relatively well), that was the only "Notification Center" for missed notifications, and its glanceability became very obvious.
Charms are somewhat similar, too. On iPhone almost every app needs a Share button somewhere and almost every app still has it in a different place today. On Windows Phone 8 it was much more obvious why a dedicated OS-level Share button accessible just about anywhere in any app was pretty great. On Desktop it wasn't seen as helpful as almost no apps supported it (either as shareable things or as apps that could be shared to) because there was no easy Win32 bridge and Microsoft also didn't think to try to integrate with clipboard operations until too late in Windows 8.1 (and then never quite delivered it because most everyone had already written off the Charms by then), as what could have been a potentially easy path to use the existing Windows "share paradigm" to bootstrap.
(You can make cases for the other 4 Charms as well beyond the Share charm, but the Share charm is the most obvious where Windows Phone proved it was a good idea but the Desktop didn't have enough supporting apps to also prove it there.)
Are live tiles universally praised? I see them mentioned positively occasionally, but I suspect they are getting some benefit… like, they are the Windows 8 feature that isn’t immediately obnoxious. Windows 8’s UI just didn’t have any redeeming features, so the element that is merely bad gets brought up as a sort of “see I’m not a relentlessly negative hater, I’m objective” thing, I bet. Is there a name for this trope?
The way I see live tiles: it was MS abandoning the widgets that had existed since Vista (though those were later removed for security reasons) and coming up with a new thing to start all over with, without backporting it, so the only way you'd get them was on the (less popular) new version of the OS. Also, they were tied to the start screen/menu; you couldn't drop one on your desktop.
The patent expired, but the minifig is also an EU 3D trademark. That is not possible for the brick, which (only) serves a technical function, namely interlocking with other bricks. Trademarks do not expire while in use. Another example of a 3D trademark, also in the US, is the Coca-Cola bottle.
In your list these are executables that change their own behaviour based on how they are called, whereas in the OP it's the OS changing code based on the name of an application.
In all but the last example, these are programs that themselves have multiple names. There are literally multiple names on the file system that point to the same executable. So `ping`, `ping4`, and `ping6` are all the same program, but it checks $0 to see which name it was invoked as in order to change its own behavior.
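The $0 dispatch described above can be sketched in a few lines (a minimal illustration, not the actual ping source; the returned strings are my own placeholders):

```python
import os

def main(argv):
    """Dispatch on the name the program was invoked as (argv[0]).

    With symlinks ping4 and ping6 pointing at one binary, the same
    executable behaves differently depending on which name was used.
    """
    name = os.path.basename(argv[0])
    if name == "ping4":
        return "IPv4"
    elif name == "ping6":
        return "IPv6"
    else:
        return "auto"   # plain `ping`: pick the family automatically
```

The key point is that the decision is made by the program itself, from information it is handed at startup, which is what distinguishes it from the OS patching behavior from the outside based on an executable's name.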
This is entirely different than when DirectX behaves differently for a program named foo.exe and another one called bar.exe.
AppArmor (and for that matter SELinux) are a different thing yet again. Here the goal is not to fix bugs or incompatibilities, but to add extra security. Similar perhaps, but not nearly as intrusive. Neither involve runtime patching, which is the most salient feature of this crash report.
From the GitHub issue it becomes clear that the blocking is caused by the EasyPrivacy blocklist. The blocked URL youtube.com/api/stats/atr is (or can be) also used for tracking users, which is why some argue that it is legitimately on that blocklist.
The tracking is not malicious. YouTube has a legitimate interest in verifying views, e.g. to recommend popular videos to others. If a view counter could be increased just by invoking an API, view counts could be manipulated easily. Also see the video [1] from ... 13 years ago ... so it might be slightly outdated. Just slightly.
[2]: https://neugierig.org/software/chromium/notes/2011/08/zygote...