I agree this was a security concern, and it was reported and addressed appropriately. That said, as these things go it's pretty minor; perhaps a medium-severity issue. Information disclosures like this may be leveraged by attackers with existing access to the lower environment, in conjunction with other issues, to escalate their privileges. By itself, without that existing access, it is not usable.
Moreover, the issue wasn't that AWS recommended or automatically set up the environment insecurely. Their documentation simply left the commonly known best practice of disallowing trusts from lower to prod environments implicit, rather than explicitly recommending that users follow it when deploying the solution.
I don’t think over-hyping smaller issues, handled appropriately, helps anyone.
Sounds like typical hyperbole. Worked at a place once where some “security researcher” trashed the product because they could do bad things on the appliance… if logged in as root.
The term was used by OpenBSD starting in 1999 for a group of devs getting together for a few days or a week and hacking on specific projects together in person. Not sure about earlier than that but that’s what I still associate the term back to.
The link goes to the press release. The actual advisory (https://www.cisa.gov/news-events/cybersecurity-advisories/aa...), linked from the press release, contains quite a bit more detail. They detail how they have observed Cisco routers being backdoored but don't limit the issue to that manufacturer.
>BlackTech actors bypass the router's built-in security features by first installing older legitimate firmware [T1601.002] that they then modify in memory to allow the installation of a modified, unsigned bootloader and modified, unsigned firmware [T1601.001].
I wonder how best to handle this kind of downgrade attack. Is reverting to an older firmware version an intended, supported feature? If so, I assume it's there in case the customer has a problem with the latest firmware and wants to revert. Maybe it makes sense to place some restrictions on reversions -- e.g. they can only be done with physical access to the device, and become impossible after an upgrade has been in place for, say, a month.
The focus on international subsidiaries was very interesting to me. I wonder what, specifically, it is about a subsidiary that makes it a softer target. Perhaps it's easier to gain physical access to a subsidiary office.
Just do what game consoles do: add hardware fuses that are expected to be blown depending on the version, and have the bootloader verify the number of fuses blown on boot. Then the device becomes a brick if it tries to boot an older firmware.
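The ratchet described above can be sketched in a few lines. This is a hypothetical, simplified model (the class and function names are made up, and real implementations live in a bootloader reading OTP fuse registers): each firmware release declares the minimum number of blown fuses it requires, the bootloader blows fuses forward on upgrade, and refuses any image whose requirement is below the current count.

```python
class FuseBank:
    """Simulates one-time-programmable fuses: they can only be blown, never reset."""
    def __init__(self, size=64):
        self.fuses = [False] * size

    def blown_count(self):
        return sum(self.fuses)

    def blow_up_to(self, count):
        # Fuses are one-way: we only ever set them, never clear them.
        for i in range(min(count, len(self.fuses))):
            self.fuses[i] = True

def try_boot(bank, firmware_min_fuses):
    """Return True if the firmware may boot, ratcheting the fuse count forward."""
    if firmware_min_fuses < bank.blown_count():
        return False  # rollback attempt: image is older than the fuse state
    bank.blow_up_to(firmware_min_fuses)  # burn fuses on first boot of a newer version
    return True

bank = FuseBank()
print(try_boot(bank, 3))  # v3 firmware: boots, blows 3 fuses -> True
print(try_boot(bank, 5))  # upgrade to v5: boots, blows 2 more -> True
print(try_boot(bank, 3))  # downgrade back to v3: refused -> False
```

The one-way nature of the fuses is the whole trick: the check can't be rolled back even by reflashing storage, at the cost of permanently bricking legitimate reverts.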
> We're not done with our request payload yet! We sent:
> Host: neverssl.com
> This is actually a requirement for HTTP/1.1, and was one of its big selling points compared to, uh...
> AhAH! Drew yourself into a corner didn't you.
> ...Gopher? I guess?
I feel like the author must know this... HTTP/1.0 supported but didn't require the Host header; making it mandatory in HTTP/1.1 is what enabled consistent name-based virtual hosting on web servers.
I did appreciate the simple natures of the early protocols, although it is hard to argue against the many improvements in newer protocols. It was so easy to use nc to test SMTP and HTTP in particular.
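That nc-style test is easy to reproduce from the standard library. A sketch, assuming nothing beyond Python's stdlib: it serves the current directory on an ephemeral local port just to have something to talk to, then speaks raw HTTP/1.1 over a plain socket, Host header and all.

```python
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Stand in for a real server; binds an ephemeral port on localhost.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The request is just lines of text -- note the Host header HTTP/1.1 requires.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: 127.0.0.1:{port}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(request.encode())
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. a 200 OK
server.shutdown()
```

The same few lines of text typed into `nc example.com 80` is exactly how these protocols used to be poked at by hand.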
I did enjoy the article's notes on the protocols however the huge sections of code snippets lost my attention midway.
This feels akin to excessively auto-completing shells to me. It's impressively quick when you're starting out, but it feels like it has to impact the learning/muscle memory that gives you accuracy and speed in the long term. Maybe that's just bias from having learned without these things, but I can't shake the feeling.
I don't know if this is everyone's reasons but for me it is:
- it does too much. doesn't follow the Unix norm of doing one thing and doing it well; things like DNS resolution, time sync, etc. all exist as systemd components.
- complexity - similar to doing too much but also just that it is no longer easily inspected and understood imo.
- participates in the Linux cycle of reinventing the wheel, excessively. Linux distros felt like they were changing their service management commands every other release for a while.
- breaks compatibility with BSDs.
I'm sure it solves real problems for real people... but I don't like it.
All of these complaints correspond to deliberate tradeoffs, and while they are correct, the negatives are entirely offset by the positives.
Doing too much? Well it does a lot, but it does it in one place instead of having each daemon or script do it in their own peculiar, non standard way.
Complexity? The configuration files are deterministic, regular, well documented. Contrast with init scripts.
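To illustrate the contrast, a hypothetical daemon that would need dozens of lines of boilerplate shell in an init script (PID files, start/stop/status cases, restart logic) is a handful of declarative lines as a unit file. The service and binary names here are made up:

```ini
# /etc/systemd/system/example-daemon.service (hypothetical service)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon --foreground
Restart=on-failure
User=example

[Install]
WantedBy=multi-user.target
```

Restart handling, dependency ordering, and privilege dropping are all declared rather than reimplemented per service.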
Reinventing the wheel? Well the old wheel was not quite fit for purpose any more. Gradual evolution only gets you so far, sometimes you do need a clean break even though most clean breaks end up being a failure.
Compatibility with BSD? You may care, I just don't. Not saying it's wrong to care about it, but it's irrelevant for most users.
The article you linked is from five years ago when the new badges were being initially tested. They have been rolled out for years with color photos. Both styles still work and if you don't go into one of a few big offices you might not have easy access to the newer style.
Not particularly cheap, but at least no longer back-ordered. I've been pretty happy with it switching 2x 4K monitors, keyboard, and mouse between a Mac and a PC laptop. Both are corporate laptops, where installing custom software is more difficult.
You like this one? I've been looking for a while but didn't find one I felt was trustworthy in that price range. For instance, this one only has 11 reviews.