It's not abhorrent. It's quite common and the correct thing to do here.
Them not disclosing doesn't make you safer. The people who want to abuse this could have been actively exploiting it shortly after the commit went live. Waiting longer before the blog / marketing release is NOT the help you think it is.
This is a very, very old debate in the security community; just read the rest of this thread and you'll see plenty of explanation as to why.
They did not, in fact, botch anything. They notified the responsible party and followed a practice that is pretty much the accepted norm (and for good reason).
How recursive should their notifications be? Just the top three distros? The top dozen? Every embedded Linux router company? How about every hosting provider?
They did what they're supposed to without being paid for it. The only other good source of funding for security research besides marketing budgets for security companies will NOT result in a disclosure timeline you'd be happier with. ;-)
External security research happens for one of only a few reasons typically:
1) hobbyists who are learning or just like to do it for fun
2) bug bounties (good luck with those in most open source)
3) marketing for security companies
4) non-public research going to CNO/CNE
If you want to kill 3, the output of 1 will not come close to matching 4, and the public is NOT better off with fewer public bugs.
You could try to make that case either way, but as others have pointed out all over this thread, the system we've landed on (90+30) is the industry standard after more than two and a half decades of experimentation.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
We have actually been more inspired by JetBrains lately than by VS Code. Take that for what you will.
We do try to pick simple sane defaults while still allowing enough customization to adapt to different workflows.
We're actually working on a startup wizard for first-time users who want to more closely replicate the feel of other RE tools, since muscle memory is hard to break.
IDEs have changed a lot in the last 50 years. Just like we shouldn't advocate for hand-writing assembly for all code, we shouldn't be stuck using CLI tooling the same way.
I share your apprehension about the current AI landscape changing so quickly it causes whiplash, but I don't think a mindset of "it's been fine for 50 years" is going to survive the pace of development that better LLM integration makes possible.
The reason that tools have not changed that much is that our needs haven't changed that much either. Even tools like `find` or `ffmpeg`, while complex, are not that complicated to use. They just require you to have a clear idea of what you want. And that requirement is exactly what most people advocating for LLMs want to avoid.
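To make that concrete (the commands and file names below are just illustrative examples, not anything from the thread): once you've decided "keep only the first 30 seconds of this video without re-encoding" or "clean up log files older than a week", the actual invocations are short and readable:

    # ffmpeg: keep only the first 30 seconds, copying streams instead of re-encoding
    ffmpeg -i input.mp4 -t 30 -c copy clip.mp4

    # find: locate (and delete) *.log files last modified more than 7 days ago
    find . -name '*.log' -mtime +7 -delete

The hard part was never the typing; it was deciding exactly what you wanted in the first place.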
IDEs have not changed that much. They've always been an editor supercharged with tools that all share the same context of a "project". And for development, it's always been about navigation (search and goto), compile/build, and run.
I don't know why people think this; you're not the first person I've heard it from, either.
First, I literally saw them do shots during a talk yesterday for some first-time presenters. Second, that WASN'T the "old DEF CON" either! Drinking is a relatively new tradition in the history of the con. I've spoken twice: once at DC 17 (no shot offered) and once at DC 23 (shots were offered). There's video proof:
Can I just say, thanks to the person who posted this for waiting until this week to do so. (Side note: I suspect it was due to the recent coverage from C++ Weekly which is a great resource: https://www.youtube.com/watch?v=h3F0Fw0R7ME)
As recently as last week we had some horrible performance problems but it looks like the queue (https://dogbolt.org/queue) is mostly still fine! Other than the long pole of a few of the decompilers being backed up, things are humming along quite smoothly! Josh + Glenn have done some great work on it! (https://github.com/decompiler-explorer/decompiler-explorer/c...)
Binary Ninja's queue is likewise empty and keeps up just fine as well. It's not a coincidence that the two commercial products funding it are both confident enough to put their stuff online like this.