2. The bug is not in Devuan, it's in something called refractainstaller, which is used for Devuan live-ISOs. If you just install Devuan that doesn't happen.
3. With a refractainstaller live-ISO, and if you chose to not define a root user, then this bug manifests.
The bug seems to have lingered for so long because, despite being rather obvious (i.e. you can just become root), nobody tried to secure a live-ISO-based system, which is something you typically use as a "rescue disk" or to diagnose hardware.
So - is this a screw-up? Yes. Does it reflect significantly on Devuan as a project? Not really.
To draw a parallel - the fact that systemd has had bugs which other init systems didn't have, or had fixed decades earlier, does not mean it's an undesirable project. My (and many other people's) problems with systemd concern its fundamental design philosophy, as well as its governance and behavior as a software project.
>2. The bug is not in Devuan, it's in something called refractainstaller, which is used for Devuan live-ISOs. If you just install Devuan that doesn't happen.
From the link:
>>When you download and install the desktop-live Devuan image, you will be prompted to create a user account at the end of the process.
... ie it seems to be talking about the normal process for installing a Linux distro - you make a live CD, boot it, and run the installer.
Nothing. Systemd is a suite of software that handles a lot of the low-level operations on Linux (in particular the service manager, plus some network configuration and other pieces). Historically, those operations were handled by separate tools, with an init system like SysVinit at the core.
A lot of people are mad about it for a lot of reasons, but if you're not a system administrator, it's probably better to stick with systemd, since it's what most of the Linux community has standardized on, and thus it's much easier to find information online about using those systems.
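To make "service manager" concrete, here is a minimal systemd unit file; the service name and binary path are made up for illustration:

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

You would enable and start it with `systemctl enable --now myapp`; under SysVinit the equivalent would be a shell script in `/etc/init.d/` implementing start/stop/status by hand.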
Amen to this. I understand some of the complaints with it getting into name resolution and other things, but having had to decipher vendors' init scripts, it's a huge improvement for managing services.
When you add in things like Podman and its use of systemd, it's overall not a bad thing at all.
> What's the impact of having systemd (or not) for the everyday layman like me that just uses Visual Studio Code to build flutter apps ?
Compared to sysvinit, after Debian upgraded to systemd, the system boots slightly faster and shuts down slightly slower (or at the same speed; it really depends on what you run).
By "unencumbered by systemd" they mostly mean "works worse". We removed tens of thousands of lines of fixes for "easy", "simple" sysv scripts when we upgraded our servers to a version that runs systemd.
Most of the general purpose distributions use systemd - almost every general purpose distro you've ever heard of. They all adopted it by choice, because they thought it was a better option than the alternatives for one reason or another.
It's an ideological flamewar that in most cases is without impact.
The binary format of the log file still seems to me to be a bad idea - it introduces corruptibility and complexity to a vital service that should be simple and incorruptible.
If a failing system starts to do weird things, plain text append only logging is preferable.
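The "plain text append only" approach preferred above can be sketched in a few lines of Python; the log filename and messages are made up for illustration:

```python
import os

# Open the log with O_APPEND so every write atomically lands at the
# end of the file, even with several writers. If the system dies
# mid-write, the file is still readable up to the point of failure,
# unlike a corrupted binary journal.
def append_log(path: str, message: str) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, (message.rstrip("\n") + "\n").encode("utf-8"))
    finally:
        os.close(fd)

append_log("demo.log", "service started")
append_log("demo.log", "service stopped")
```

Opening, writing one line, and closing per message is slow but maximally robust, which is exactly the trade-off the comment is arguing for on a failing system.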
It’s a problem because people do not wish to learn new paradigms. Put me in that camp - it took months to debug a 90-second hang and work around it. Of course, the fast boot owes to its parallelization.
TPM = Trusted Platform Module. Trusted is an adjective modifying platform. The business case is that software should not run on untrusted platforms because hardware can always attack software. So, the promise is that TPM will allow software to guarantee* to users that the hardware is not malicious.
From my perspective TPM is mostly about compliance with security directives. Actual security engineers would realize the TPM's security provided is about as far as you can throw it. You have no idea who designed the thing, who manufactured it, or who swapped it out while it was in shipping to you.
> You have no idea who designed the thing, who manufactured it, or who swapped it out while it was in shipping to you
Wait, what? In most cases you 100% know who designed and manufactured it. Regarding "swapping out" a TPM, how do you do that for fTPMs or TPMs that are on the same die as the CPU? Come up with a perfect replica AMD CPU with a bugged TPM? Desolder the original CPU and put the replica in?
I think if you went back in time, you'd find people saying that openssl was developed by a consortium of open source contributors.
It's painfully obvious nowadays that openssl was written by the NSA (or equivalent state level entity) via intermediaries deliberately adding subtle but significant vulnerabilities.
Let me propose this metaphor: you buy a front door to your house. For some reason I (the door vendor) include a complete lockset. I tell you that the lockset is totally secure. I offer you volumes of academic research and attestations that this is true. But there is no practical means by which you can establish the security of that lockset.
Do you believe that no one else can open your front door?
> It's painfully obvious nowadays that openssl was written by the NSA
The problem is that your lack of context and perspective on this fairly simple, easily-falsified theory calls all of your opinions into question.
A more nuanced conspiracy theorist would say "if you look at PR's to openssl that contributed later-discovered security issues, 70% were from first-time contributors who never went on to submit any other PR's". And I'd be like "wow, that's suggestive of a coordinated action", and we could dig into it.
But "the NSA wrote openssl" is as factually, demonstrably wrong as saying "the NSA builds every door lock that's for sale at Home Depot". It's too big of a conspiracy, too inefficient for the supposed state goals, and too easy to falsify by just looking at a couple of examples.
The only one that would fit my mainboard I bought on Amazon, and I only got it to upgrade Windows. The indication on it is "made in China". As a consumer I also have no idea how these suspicious chips that I'm required to plug into my mainboard are supposedly trusted, and by whom, and what for.
"Plug this chip that says made in China into your mainboard, for security reasons, to continue." is not really emitting trust or confidence in any way.
> "Plug this chip that says made in China into your mainboard, for security reasons, to continue." is not really emitting trust or confidence in any way.
I'm sure the 30 other chips made in China on the same board are entirely fine
> You have no idea who designed the thing, who manufactured it, or who swapped it out while it was in shipping to you.
All wrong, as others have pointed out. As for the last of the above, if the OEM includes (as they should) platform certificates for the TPM, then the TPM cannot have been swapped out while in transit w/o the OEM helping the attacker. For example, Dell includes platform certificates binding the TPM.
Trusted system is one whose failure would break a security policy. It's not about it being secure, it's about it breaking other things when it's broken.
> Honestly TPM is probably creating more bad than good at this point.
The problem isn't the TPM. The problem is the CPU (SP) being vulnerable to voltage fault injection attacks. You could be using no TPM and still have all your secrets leak if the host is fully compromised.
Microsoft is edging closer and closer to dropping support for Windows 10 (even a computer I built in 2017 that's still running perfectly fine can't upgrade). But for many users, changing to another OS besides Windows is tantamount to not functioning, so planned obsolescence continues apace.
They chose an arbitrary cut-off date for hardware support for their new OS. They decided not to support old stuff anymore and they had to pick a date/technology platform. It was always going to be arbitrary. In my opinion they should've picked a clearer distinction (i.e. require a certain level of AVX support so all binaries can be built with AVX optimizations enabled) but I can see why they chose to do this. After all, they're going to have to support the OS for ten years; that four-year-old CPU will be fourteen years old by the time Windows 11 goes out of support.
That link does not say it will work fine. It says it is not recommended or supported and if it blows up it is your problem. Additionally it says you might not get any updates. Certainly, it sounds like it might just be some ass covering on their part but they were also testing a nag watermark for unsupported installs like this so maybe not. Either way it doesn't sound like a really solid path forward.
It's certainly not a path forward that Microsoft will recommend. It'll work fine, though.
If not, there are other operating systems that do work. Microsoft isn't the exclusive owner of the PC space, that's one of the major points of all of the antitrust fines and lawsuits.
He said it'll work fine, not that Microsoft is promoting it as such.
There are no fundamental changes to what's actually required; the limitations are arbitrary and set by Microsoft (and they provide a means to get around them).
It can work yes but AMD chips didn't get GMET until Zen2 so if you leave virtualization based protection on you might see a performance hit.
From Microsoft's website:
>Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance.
Which is amazing, since Windows 11 is full of mentions of green energy, lowering energy consumption, and asking you to lower screen brightness to reduce carbon emissions. Total greenwashing when you consider the gigantic amounts of e-waste that arbitrary cut-off date will lead to. It's just funny tbh - I get that it's most likely very different teams working on those things, but it's tone-deaf at best.
But hey at least the OS has a revolutionary new green technology called... Battery saving mode!
The worst part is that it is clearly cargo culting. What consumer suddenly buys kool aid they never bought before because it says "paper straw now!" on the packaging?
True. It is not as though I've ever cared about what the vendor supports at home before.
Most of my machines were on a Linux distro before that decision, and Debian 12 is providing a good-enough experience for me on the desktop, even for gaming.
I'll probably just stay there indefinitely. Is that an upgrade? Subjective. I'm happier here.
IIRC it has more to do with that series of chips not having GMET (AMD Guest-Mode Execute Trap for NPT), which is used in Windows 10/11's virtualization-based protection. Microsoft requires this option for all new PCs from their partners, but you can install Windows 11 and run it fine without this CPU feature (there is a performance hit if you leave virtualization-based protection on, though, since it has to be done in software).
Don't reward them for that. The only solution is to move to another OS, that's the only thing they will ultimately understand - no matter how inconvenient it might be.
That's not an option if you use your machine for work and your Dev tools only work on windows. PS5/Xbox toolchains only work on windows, as a gamedev I don't really have a choice.
I think they hate Unix, not Windows, because they dragged all the legacy nonsense Unix had for hardware reasons straight into modern times so everyone can laugh at it.
Sounds like the last time you used Linux was 20 years ago. Linux has been as easy as Windows to install and use for at least 10 years now. My mother's laptop runs Linux Mint and she doesn't know it's not Windows, even though the colors are all wrong. Why? It's all about the DE - if it looks like Windows, walks like Windows and quacks like Windows, it's Windows. If she can click on the menu and find the internet then it's a win. Installation was as easy as Windows and everything worked out of the box.
Hell, I installed the Chicago95 XFCE theme on my main system to see how well it emulated the look and feel of Windows 95 and wound up liking it. Why? Because even though it looks dated, the icons were immediately familiar and I felt navigation instantly become easier.
Do not underestimate the power of familiarity. Many of us grew up on DOS/Windows playing games and typing school work up in Word, so moving away from those familiar waters is HARD. It's like being an immigrant moving to a new country - you have to put in extra effort to adjust to the culture and language. Some can, some can't. YMMV.
I've always felt that $BIG_BRAND_DISTRO+KDE got pretty close. It's still Linux, so not quite the same, but as far as look and feel, it's pretty close I think.
Use any Ubuntu distribution with MATE or Cinnamon. Like shit dawg, I got my grandma using Linux Mint with MATE.
Changed the background to match the one on her old laptop and added some desktop icons, and boom, 99% of her experience was the same. Had to help her a little with LibreOffice -- that's a little different -- but otherwise it's functionally similar to Windows 7 / Win10.
I'm using Linux as a daily driver and at this point there isn't anything I could do on Windows that I can't do here. The holdout for a while was games, but Proton w/ Steam works well and I can play big titles like Cyberpunk 2077.
The problem with the cloud is how often I've run into blanket linux/bsd support bans. Especially when it comes to professional certifications done online. I had one website refusing to work on my FreeBSD or Debian installs. It would just get to a certain point and not let me proceed and multiple buttons refusing to work properly.
Got on the phone with support and they were dumbfounded. Got the idea to just spoof my user agent as a windows box on edge and it worked perfectly afterwards.
Even if it's not done on purpose, there is a lot of crufty shit online that breaks in unexpected ways when I'm on a Linux/BSD box. Especially when interfacing with government websites and webapps. Our state fire code website looks straight out of 2002 and has multiple warnings about making sure to use IE6... in 2023.
Maybe it's just my use case (fire industry / local government), but it helps to have a Mac or Windows machine lying around as backup.
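The user-agent workaround described above boils down to sending a Windows/Edge identification string instead of the real one. A minimal programmatic sketch with Python's standard library (the UA string and URL are just examples; in a browser you'd use a UA-switcher extension instead):

```python
import urllib.request

# An example Edge-on-Windows user-agent string; sites that gate
# features on the client OS only see this header, not the real OS.
SPOOFED_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/120.0 Safari/537.36 Edg/120.0")

def fetch_as_windows(url: str) -> urllib.request.Request:
    # Build a request whose User-Agent header masks the real platform;
    # pass it to urllib.request.urlopen() to actually fetch the page.
    return urllib.request.Request(url, headers={"User-Agent": SPOOFED_UA})

req = fetch_as_windows("https://example.com/")
```

This is why "blanket Linux/BSD bans" are usually so shallow: the server has nothing to go on but what the client volunteers.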
Windows 10 was released in 2015, so they will have offered 10 years of support, which was the standard policy in place ever since Windows Vista (Windows Vista, 7, 8/8.1, 10).
Apple does not support any macOS version for that long, and is unlikely to support a current version of macOS on any given device for that long.
You say “support” like the moment they stop it, the OS ceases to work. OS X Leopard is still usable for most tasks. Lots of people still run Windows 7 and Windows XP without any need for “support”. Windows 10 will be even better without constant “support” reboots. Can’t say this for Windows 11, as it's so tightly stuffed with spyware and online integrations that it might just not boot if MS flips some server switch.
Windows 7 is only starting to get obsoleted now: while Microsoft is still updating it with critical security fixes, Qt and Chromium dropping it is pretty much the end for non-legacy usage. (Goodbye Windows I guess...)
Apple does not provide security updates for Catalina, which was released 3.5 years ago. People would be crazy to run unpatched OSes for any use case involving the internet or wifi.
That does not make these PCs obsolescent in any way. People still use Windows 7, and even Windows XP. Especially in the many contexts where "security" is not important at all.
I've had laptops where secure boot was permanently on unless you used legacy bios, and I've fixed several, both PCs and laptops, where there was no option to disable secure boot or use legacy bios. This was some years ago, and the situation has improved a lot since then.
ARM devices that are Microsoft-certified will specifically not give you that option.
You are probably largely using x86_64 machines, which come with the option to do so, but there has been a lot of push toward things like ARM for energy efficiency reasons.
An AMD Lenovo laptop supposedly gets 16 hours of battery life and again, this is subject to what you're running, but it runs x86 stuff natively instead of having to translate it. Isn't this 'good enough' for most people? I know we have people wanting week-long laptop batteries, but over 12 solid, real-world hours should be good enough for the majority of users, I'd think.
Uhh what? How does that work? Where do they request it? The TPM manufacturer? Microsoft?
TPMs aren't very secure, and as a discrete component their connection to the CPU can be intercepted (unlike fTPM or Apple's integrated solutions). There's a big difference between having a deliberate backdoor and just a vulnerable design that can be exploited.
I haven't seen them accused of being backdoored. Intel's ME (and AMD's equivalent) perhaps but that's not the TPM.
TPMs can also be used to hide DRM keys from the user, and I'm also opposed to that, but generally that stuff is hidden in other hardware - like Google's Widevine stuff in mobile CPUs.
Yes, if you want to recover keys from a dTPM you have two options:
- decap it, scan it with a scanning electron microscope, reverse engineer it (or have already done so), and read the seeds and all NVRAM on the chip
- force the manufacturer to record the seeds even though they have processes to never do so, then force the manufacturer to reveal the seeds a dTPM shipped with given an EKpub for it
A few nation states could probably pull off the latter, but very few. And I suspect they haven't bothered and won't until TPM usage finally gets in the way. This is pure speculation, and they may well have forced all the manufacturers already for all any one of us knows.
More nation states could pull off the former. But again, they might not bother until TPM usage finally gets in the way.
As long as BMCs and BIOSes continue to use non-encrypted sessions to talk to dTPMs there is no need to do any of this when the attacker has physical access to the motherboard.
They have some weaknesses. A dTPM uses an unencrypted protocol to communicate with the CPU (simple i2c or SPI) and it's pretty easy to sniff it if you manage to get legit access. But you do need a legit user to log in to the machine once. This is a bit of an achilles heel.
In this sense an integrated solution is better because there is no simple bus to sniff, but it does have to be properly implemented of course. Which seems to be not the case here.
By the way a dTPM should have a real entropy RNG so technically it shouldn't have any (usable) seed. It's basically a smartcard soldered onto the mainboard. Of course smartcards can also have key generation flaws like the Infineon flaw a while back. https://www.schneier.com/blog/archives/2017/10/security_flaw...
> A dTPM uses an unencrypted protocol to communicate with the CPU
While that is strictly speaking true, the TPM command set allows you to set up an encrypted session to the TPM using an ECDH or RSA key for key exchange that authenticates the TPM.
The problem is that the BMCs and BIOSes out there don't record a public key for a primary key on the TPM and then don't bother using encrypted sessions (not even opportunistically getting that public key from the TPM, which would defeat passive attacks).
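The "record a public key, then use it" idea described above is essentially trust-on-first-use pinning. This is not the actual TPM2 wire protocol - just a stdlib-only toy model of what a BIOS/BMC could do with the TPM's primary public key; all names and the key bytes are placeholders:

```python
import hashlib
import hmac
import os

def fingerprint(pubkey: bytes) -> str:
    # Stand-in for hashing a real TPM public key structure.
    return hashlib.sha256(pubkey).hexdigest()

class HostFirmware:
    """Toy BIOS/BMC that pins the TPM's public key."""

    def __init__(self):
        self.pinned = None  # fingerprint recorded at provisioning time

    def provision(self, tpm_pubkey: bytes) -> None:
        # Done once, in a trusted setting (e.g. by the OEM).
        self.pinned = fingerprint(tpm_pubkey)

    def open_session(self, tpm_pubkey: bytes) -> bool:
        if self.pinned is None:
            # Opportunistic pinning: defeats passive sniffing only,
            # since an active attacker could be first to the pin.
            self.pinned = fingerprint(tpm_pubkey)
            return True
        # Strict mode: refuse a chip presenting a different key.
        return hmac.compare_digest(self.pinned, fingerprint(tpm_pubkey))

fw = HostFirmware()
genuine = os.urandom(64)                    # placeholder for the TPM's EKpub
fw.provision(genuine)
assert fw.open_session(genuine)             # same chip: session allowed
assert not fw.open_session(os.urandom(64))  # swapped chip: rejected
```

In the real protocol the pinned key would then be used for an ECDH/RSA exchange to encrypt the session; the point of the sketch is only the pin-then-verify step that most firmware skips.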
Thanks, I didn't know that, I thought indeed that it was simply not possible with TPM 2.0.
I do think it's time for a TPM 3.0 though. What Apple does with their T2 security chip, and later with the M1/M2, is have the secure element handle not only the key material but the actual encryption as well. They have hardware acceleration that can handle encryption at full disk speed. This is still a much better option than a TPM, especially with symmetric encryption, where the key would otherwise inevitably end up in the main CPU. In Apple's scenario this no longer happens.
- encrypt all command and response parameters instead of up to just one
- add a version of TPM2_Quote() that encrypts and signs, so one can have ciphertext that one can demonstrate was made by a TPM encrypting to a restricted, shielded key
- add a small secure enclave facility
- add more EC algorithms, EdDSA, etc.
- add more cipher modes for AES
- increase RAM and NVRAM requirements
All of this can be done incrementally in 2.x, so calling it 3.0 would be just marketing (perhaps pretty good marketing).
> By the way a dTPM should have a real entropy RNG so technically it shouldn't have any (usable) seed. It's basically a smartcard soldered onto the mainboard. Of course smartcards can also have key generation flaws like the Infineon flaw a while back. https://www.schneier.com/blog/archives/2017/10/security_flaw...
The seeds are an essential part of the TPM story for the generation (derivation) of primary keys, and for being able to "take ownership" of a TPM by changing those seeds.
The seeds are not an essential part of the TPM story for its RNG. A TPM absolutely can and should have a solid HW RNG. Though, were I designing a TPM, I'd combine the output of a HW RNG w/ a PRNG seeded internally.
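The "combine a HW RNG with an internally seeded PRNG" design mentioned above can be sketched as XOR-mixing two independent streams, so the output is no weaker than the stronger source. This is a toy model (with `os.urandom` standing in for the hardware RNG), not a real DRBG:

```python
import hashlib
import os

class MixedRng:
    """Toy model of a TPM-style RNG: hardware entropy XOR seeded PRNG.

    An attacker must compromise BOTH sources to predict the output:
    a failed hardware RNG is covered by the PRNG, and a leaked seed
    is covered by the hardware entropy.
    """

    def __init__(self, seed=None):
        # Internally generated seed at "manufacture" time.
        self.state = seed if seed is not None else os.urandom(32)

    def _prng_block(self) -> bytes:
        # Simplistic hash-chain PRNG; a real design would use a
        # standardized DRBG construction.
        self.state = hashlib.sha256(self.state).digest()
        return self.state

    def random_bytes(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            hw = os.urandom(32)      # stand-in for the hardware RNG
            sw = self._prng_block()  # seeded PRNG stream
            out.extend(a ^ b for a, b in zip(hw, sw))
        return bytes(out[:n])

rng = MixedRng()
key = rng.random_bytes(32)
```

Note that even with a fixed, known seed the output stays unpredictable here, which is exactly the argument for why the seed need not be the RNG's only entropy source.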
But the seed itself should still be fully random though? And generated on-device during initialisation. Derived keys are a thing of course, and I understand the benefit thereof.
But a manufacturer-installed seed that they have control over sounds like a very bad idea.
> TPMs aren't very secure and as a discrete component their connection to the CPU can be intercepted (unlike fTPM or apple's integrated solutions)..
The problem here is that while it is possible for a BMC / BIOS to know a dTPM's EKpub and use it to establish encrypted (and authenticated) sessions to the dTPM, most BMCs/BIOSes don't. This is a limitation on the host side, not the TPM side. I get that in total the vulnerability exists, but it doesn't have to, and TPM has a perfectly good solution for it. Take it up with the OEMs!
Every time I think about the millions of computers that will be declared worthless this year, it makes me a little bit angrier.