Hacker News | Borealid's comments

With PIV, the private keys are stored inside the smartcard (a Yubikey is just one type of smartcard) and don't leave it. They're used for encryption/decryption by the host.

Yes, it's generally sound, and is the primary means of authentication and encryption used by the US military for classified systems.


It does not. The files are copied to the recovery image, not the machine's encrypted drives.

If Microsoft wanted a backdoor, there is no need to hide it in the official Windows Recovery Environment image.

Just sign an alternate version of the recovery environment that doesn't bother displaying a login screen. Done - you can access any TPM-only Bitlocker setup freely. This is actually LESS risky than keeping the exploit in the publicly-available version of WinRE, because you don't have the risk of pesky security researchers finding your backdoor.

Hanlon's Razor and Occam's Razor both say this is probably a bug that lets you use some kind of early-boot filesystem-corruption-fixing code on the recovery image to break the login screen and leave the disk unlocked by accident. It deletes itself because it's, well, intended to be a filesystem fix log, and the log gets deleted when it's done being replayed so it doesn't happen a second time!


There are two ways to "use a PIN".

Since there's a ton of misunderstanding in this thread, I'm going to go into how disk encryption works conceptually.

First, there's a symmetric key to encrypt blocks on the disk. Since you want to be able to change your unlocking password/mechanism without re-encrypting everything on the disk, this has nothing to do with unlocking the disk. This is what you want to get BY unlocking the disk. Let's call this the "data encryption key".

Then, there's something you use to encrypt the data encryption key. Let's call this the "key encryption key" (abbreviated KEK from here on in).

When you use a TPM, the KEK is stored inside the TPM. When you use a TPM PIN, the TPM refuses to release the KEK for use by the OS unless that PIN is provided.

You could say "why not make the KEK be a hash-mixed combination of a PIN and something inside the TPM?". One could do that! But that's not how Bitlocker works. There is a reason it doesn't work that way: the TPM is supposed to let company admins in charge of the device access it even if the original PIN is forgotten, by using other policies letting them get at the KEK. I personally set my own devices up such that the passphrase IS part of the KEK itself.
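The layering described above can be sketched in a few lines. This is a toy illustration only (XOR "wrapping" and a made-up PIN, NOT real cryptography or Bitlocker's actual format); the point is that the DEK encrypts the data, a KEK wraps the DEK, and a composite KEK can be derived from a PIN plus TPM material so that neither alone suffices:

```python
import hashlib
import os

def wrap(kek: bytes, dek: bytes) -> bytes:
    # Toy XOR "wrap" of the DEK under the KEK -- for illustration only;
    # real systems use AES key wrap or similar authenticated schemes.
    pad = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(dek, pad))

unwrap = wrap  # XOR is its own inverse

dek = os.urandom(32)  # data encryption key: encrypts the disk blocks

# TPM-only mode: the KEK is a secret held by / released by the TPM.
tpm_secret = os.urandom(32)
blob = wrap(tpm_secret, dek)

# Composite mode: KEK derived from a PIN *and* the TPM secret, so
# neither one alone can recover the DEK.
kek = hashlib.pbkdf2_hmac("sha256", b"1234", tpm_secret, 100_000)
blob2 = wrap(kek, dek)

# Changing the PIN re-wraps only the 32-byte DEK; the disk is untouched.
new_kek = hashlib.pbkdf2_hmac("sha256", b"5678", tpm_secret, 100_000)
blob3 = wrap(new_kek, unwrap(kek, blob2))
assert unwrap(new_kek, blob3) == dek
```

The last three lines show why the DEK/KEK split exists at all: a password or policy change never requires re-encrypting the data itself.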

Interestingly, LUKS does not natively have a composite-key mode that lets you combine a password with TPM material, but there are some good reasons not to use JUST a password:

1. The strength of your disk encryption reduces to the strength of the password, whereas a TPM can hold a 256-bit truly random key

2. If someone keylogs the password, or tricks you into disclosing it, they can later decrypt your drive from anywhere, whereas a TPM binds the attack to those in possession of the TPM

3. A password alone has no protection against brute-force attacks (no rate limiting), whereas a TPM imposes - or tries to impose - a rate limit

Now, let's go on to what YellowKey attacks.

A TPM can have inside itself "registers", called PCRs. These PCRs can be updated but not reset - think of it like you can add numbers to them but not subtract, and they only go back to zero when you reboot.

Using a passwordless encrypted boot, the TPM is configured to only release the key when the PCRs are in the exact correct state. As the OS boots it adds numbers to those PCRs. If you boot "the wrong" software, the numbers in those registers won't match the expectations, and you cannot unlock the disk.
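The PCR mechanism above can be modeled in a few lines (a simplified sketch; a real TPM's extend operation hashes per-bank digests via TPM2_PCR_Extend, but the one-way, order-sensitive chaining is the same idea):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend: new = H(old || H(measurement)).
    # You can only fold values in; you can never set a PCR back.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start at zero on reboot
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

expected = pcr  # the sealing policy records this value at enrollment

# Booting different software yields a different chained value,
# so the TPM refuses to release the sealed key.
evil = bytes(32)
for component in [b"firmware", b"evil-bootloader", b"kernel"]:
    evil = extend(evil, component)
assert evil != expected
```

Because each value depends on everything extended before it, there is no way to "subtract" a measurement and forge a matching state.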

Speculation on my part: the reason there's an exploit here is that the Windows Recovery Environment apparently can match the PCR values for the booted OS, causing the TPM to release the key, but WinRE doesn't require you to get your password right before it gives you access to the data. So far as I know, protecting the TPM key with a PIN would mitigate this issue, but it's still bad.

Or maybe the exploit actually does something inside the TPM itself, causing it to unconditionally release the key even when protected by a PIN: that would be even worse, but *NOT* a problem with Windows. That would be a problem with the TPM.


> Since there's a ton of misunderstanding in this thread

True. It's unfortunate, and there's a lot of false information being spread here.

> the KEK is stored inside the TPM

That's not how it works. The KEK is not stored inside the TPM, but encrypted/decrypted by the TPM.

> You could say "why not make the KEK be a hash-mixed combination of a PIN and something inside the TPM?".

Bitlocker does that. Cryptenroll doesn't (https://github.com/systemd/systemd/pull/27502), which is bad and still hasn't been fixed.

TPMs are a nice idea, but there are a few problems:

- The KEK should also depend on the PIN. Cryptenroll does not do this at all and Bitlocker limits the PIN to 20 characters.

- There are various manufacturers of TPMs and all of them have different implementations. Some of them had been broken in the past, which is why it's important to make secrets PIN-dependent.

I seriously doubt the author found a way to bypass PIN protected setups in general. This should only be possible in combination with a vendor/model specific vulnerability. Maybe an fTPM?

As of this moment, I would rather look at it as a convenience feature. A high-entropy password + a proper KDF like scrypt or Argon2 (not possible on Windows) is the better choice. Encryption should be handled by SoC engines like on Macs, iPhones, or some Android phones anyway, to mitigate other attacks and preserve performance. Panther Lake CPUs with vPro support provide this on Windows.


> That's not how it works. The KEK is not stored inside the TPM, but encrypted/decrypted by the TPM.

No, the KEK is stored inside the TPM, and the DEK is decrypted by it. If you have three layers (two KEKs, one encrypted with the other) then sure. But at the end of the day one private key is actually stored inside the TPM, taking up as much space as the length of the key.

To be specific, Bitlocker uses a stored key, not a derived key.

> Bitlocker does that. Cryptenroll doesn't (https://github.com/systemd/systemd/pull/27502), which is bad but has not been fixed.

I don't think Bitlocker does that, because it's possible to set up admin PIN recovery techniques. That would be impossible if the PIN were part of the KEK.

> - The KEK should also depend on the PIN. Cryptenroll does not do this at all and Bitlocker limits the PIN to 20 characters.

I agree.

> There are various manufacturers of TPMs and all of them have different implementations. Some of them had been broken in the past, which is why it's important to make secrets PIN-dependent.

I agree. Perhaps consider using a FIDO2 token (supported by cryptsetup) instead of a TPM. There are open-source implementations of FIDO2 and open-hardware ones too. There are even open-source implementations of FIDO2 where the key is in fact derived from the user's PIN (plus a stored secret). If you did that, you get the proper security properties even without cryptsetup mixing the two methods.
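The derive-from-PIN design mentioned above can be sketched as follows (a simplified model in the spirit of an hmac-secret-style scheme; real FIDO2 implementations differ in the details, and the `Token` class here is hypothetical):

```python
import hashlib
import hmac

class Token:
    """Hypothetical hardware token: holds a device secret that never
    leaves it; the usable key only exists transiently, derived from
    the stored secret plus the user's PIN."""

    def __init__(self, device_secret: bytes):
        self._secret = device_secret  # stays inside the token

    def derive_key(self, pin: bytes) -> bytes:
        # A wrong PIN silently yields a different (useless) key, so
        # extracting the stored secret alone decrypts nothing.
        return hmac.new(self._secret, pin, hashlib.sha256).digest()

t = Token(b"\xaa" * 32)
assert t.derive_key(b"1234") != t.derive_key(b"4321")
```

This is the property being argued for: compromise of the hardware secret and knowledge of the PIN are each necessary but not sufficient.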

> I seriously doubt the author found a way to bypass PIN protected setups in general.

I agree - I think the author probably just found a flaw in the Windows Recovery Environment and is talking up how a PIN only helps you if the attacker does not know your PIN (in other words, acting as if the PIN provides no threat resistance when really it provides an additional layer).

> As of this moment, I would rather look at it as a convenience feature. A high entropy password + a proper KDF (not possible on Windows) like scrypt or argon2 is the better choice. Encryption should be handled by SoC engines like on Macs, iPhones or some Android phones to mitigate other attacks and preserve performance anyway. Panther Lake CPUs with vPro support do on Windows.

I think the best you can do right now is to layer a password with a hardware device. I don't think saying that the hardware devices are flawed means they are not useful as PART OF the security setup. It would certainly be nice if the software did this automatically/easily and it's unclear to me why it does not.


Thanks, I was familiar with encryption but not with bitlocker.

So this only affects a particular mode of bitlocker in which the drive is automatically decrypted on boot without the user providing any secret. Meaning the key is basically stored in plaintext on-device, albeit in a convoluted way.

To me it seems intuitive that such a mode isn't secure. It's a bit like protecting your door with an unpickable unbreakable lock, but then putting the key in a lockbox on the wall with a flimsy padlock that can be raked or cut off in seconds.

It seems roughly equivalent to not encrypting the drive at all so it doesn't seem surprising that there's a way to bypass it.


The point is that the lockbox is the TPM, which on paper is supposed to be unbreakable. In practice it can sometimes still be broken with physical attacks (like side-channel analysis or fault injection, or even simply snooping the communication between the TPM and the rest of the system with a logic analyzer), even though it is supposed to be designed to resist exactly such attacks.

If the TPM is properly designed and manufactured, and the software relying on it is again properly designed and implemented, then it would be perfectly secure. The problem is more the difference between the theory and the real world; the flimsy lockbox analogy doesn't hold.


I don't think any of the attacks being discussed are actually attacks on the TPM's own threat model.

I think they're attacks on Windows' measured boot approach.


The vast majority of TPMs today live inside the CPU (fTPM); you can't physically attack them the same way.

I gave three ways in which encrypting a disk using a TPM provides advantages over encrypting the disk using a secret password.

Encrypting the disk using a secret password provides advantages over encrypting the disk using a public password.

Encrypting the disk using a public password again provides advantages over not encrypting the disk (such as being able to securely "delete" data by removing the data encryption key).

I agree with your core point that attempting to use measured boot and secure boot to control whether the disk can be decrypted is full of holes. But if you want the computer to have an encrypted drive and to be able to boot up without a network or human intervention, what are your options really?


If we assume malicious software was already present from the beginning, that opens up some possibilities where the TPM is bypassed.

For example, storing a second, hidden copy of the master data encryption key, in an obfuscated form on a region of the disk that is unused or somehow reserved for the OS.


That does not match up with the way this exploit works.

An un-exploited system is booted with a modified version of the Windows Recovery Environment.

Like I said, I think the not-well-described problem here is that (effectively) the lock screen on Windows RE is not secure, so you can have a PCR match in the TPM, but then access the disk as an administrator without typing the admin's user account password. That's not a vulnerability of the TPM itself, and it's not some kind of persistent exploit. It's a flaw in the Windows RE.

I'll also point out it grants access to do only what Microsoft themselves could do at any point. Anyone who has the ability to make a validly-signed copy of Windows could break into a TPM-locked Bitlocker setup exactly this way. People who use Bitlocker without a PIN are implicitly accepting that risk.


Their strategy WAS GamePass - get a bunch of users accumulating huge collections of inexpensive-but-high-value games, paid for via a subscription (rented), that are only playable on Windows (enforced via Microsoft's own software and an account login). Use loss aversion to prevent the users from letting their subscriptions lapse.

They made a tactical mistake by trying to directly monetize the GamePass subscription instead of having it remain a purposefully-underpriced vendor lock-in mechanism. Whoops.


Believe it or not, email service providers actually exist.

Rollernet.us is a good one. They have excellent deliverability, reasonable prices, and everything you could want related to email.

They have a few minor other services, like DNS management, but they are not a cloud compute provider.

Another option is to use a cloud compute provider like AWS. You don't need to run the VM yourself to use SES for email messages. The hard part is the webmail access: you have to choose between a poor interface (an S3 bucket) and running a managed VM to host something like Roundcube.


SSH has *ANOTHER* built-in solution, in the form of the SSHFP DNS record.

If the DNS record for the host has an SSHFP (SSH FingerPrint) record, SSH will compare it to the retrieved public key(s) and refuse the connection if there is a mismatch. It can be configured to require DNSSec for this, or to only reject if it gets a secure rejection (to prevent DoS).

It works perfectly, has no notable downsides (just add a DNS record when you generate the host's SSH key), and has been around for many years.
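The record contents can be computed by hand (a sketch; in practice `ssh-keygen -r hostname` emits the records for you, and the algorithm table below covers only the common key types):

```python
import base64
import hashlib

def sshfp_rdata(openssh_pubkey_line: str) -> str:
    """Build SSHFP record data: '<algorithm> <fp-type> <hex-fingerprint>'.

    The fingerprint is a hash of the decoded key blob -- the base64
    field of the one-line OpenSSH public key format.
    """
    keytype, b64 = openssh_pubkey_line.split()[:2]
    algo = {"ssh-rsa": 1, "ssh-dss": 2,
            "ecdsa-sha2-nistp256": 3, "ssh-ed25519": 4}[keytype]
    fingerprint = hashlib.sha256(base64.b64decode(b64)).hexdigest()
    return f"{algo} 2 {fingerprint}"  # fp-type 2 = SHA-256
```

The returned string is what goes on the right-hand side of the `IN SSHFP` record for the host.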


It is very insecure unless you use DNSSEC, isn't it?

It just means an attacker also needs to MITM DNS if they MITM the host. Not trivial, but depending on the setup it might not be much harder.


I recommend reading the description of the option `VerifyHostKeyDNS` in the `ssh_config` man page.

If set to `yes`, you get automatic trust-on-first-use (no user prompt) if you use DNSSec, and you get the current asking-the-user behavior if your DNSSec is broken or you are under attack.

Obviously it's more secure if you use DNSSec, because that way you can reflexively deny any request to manually verify a host key, but it provides value regardless.
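For reference, the client side is a single option (the host pattern below is a placeholder):

```
# ~/.ssh/config
Host *.example.net
    VerifyHostKeyDNS yes
```

Note that the client trusts the resolver's AD (authenticated data) bit as the "DNSSEC validated" signal, so this is only as strong as the path to your validating resolver (e.g. a local validating resolver is the safe choice).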


Correct. Very insecure unless your client goes out of its way to perform DNSSEC validation.

But wait, there's more: SSH config, resolv.conf, DNS RR setup.

A long checklist for successful SSHFP deployment:

https://egbert.net/blog/articles/dns-rr-sshfp.html


That site doesn't mention that when DNSSEC is absent, the behaviour of SSH is identical to what happens if you hadn't used the SSHFP record at all, except that against attackers who don't also spoof DNS it additionally displays "no matching host key found in DNS".

So even without DNSSec using the SSHFP records is an improvement over not using them because some of the time it tells you for certain you're being interfered with.

There is no situation in which an insecure DNS response is auto-trusted by the SSH client.


Is there something like this for Wireguard?

Wireguard has no key distribution mechanism.

You can use software like Headscale/Tailscale/Netbird on top.


There is already plenty of open hardware, it's just not this-year's-top-performance.

In the category of ~1-3 years' performance lag you get Rockchip and friends, which are closed hardware that allows open computation. See computers made by the company MNT as an example.

In the category of ~5 years' performance lag you get "soft" cores, where you buy an FPGA (dynamically reprogrammable hardware) and make it run a CPU you design yourself. If you want to, for example, make your CPU have more cache and fewer ALUs, you can do that by tweaking some files and reprogramming the FPGA. This has a cost in terms of power efficiency and runtime speed, but you can absolutely run a full Linux desktop experience on an FPGA today, and the hardware has no way to try to prevent you from running any software.

You can solve the problem of all the cellular basebands being closed source either with software-defined radio or with a closed USB/PCIe cellular modem connected to an open processor.


> If you make an LLM more safe, you are going to shift the weight for defensive actions as well. > > There’s no physical way to assign weights to have one and not the other.

Do you think a human is capable of providing assistance with defense but not offense, over a textual communication channel with another human?

If no, how does a cybersec firm train its employees?

If yes, how can you make the bold claim that it's possible for a human to differentiate between the two cases using incoming text as their basis for judgement, but IMpossible for an LLM to be configured to do the same? Note that if some hypothetical completely-deterministic LLM that always rejects "attack" requests and accepts "defense" ones can exist, the claim it's impossible is false. Providing nondeterministic output for a given input is not a hard requirement for language models.


> Do you think a human is capable of providing assistance with defense but not offense, over a textual communication channel with another human? > If no, how does a cybersec firm train its employees?

In general, no, humans can’t be sure they are only helping with defensive and not offensive work unless they have more context. IRL, a security engineer would know who they’re working for. If they’re advising Apple, then they’d feel pretty confident that Apple is not turning around and hacking people.


If the task is ill-defined, then it's a bit unfair to make it sound like the problem is that an LLM can't be configured to do something, if a human would have an equally hard time with the same task. The statement "it's impossible to configure the weights to..." should really be something more broad like "it's impossible to...".

I have no comment about whether it's impossible to determine the intentions of a person asking for assistance through a textual conversation with that person.


> IMpossible for an LLM to be configured to do the same?

Because that’s what I am seeing emerge from the various efforts to build LLM safety tools.

> Do you think a human is capable of providing assistance with defense but not offense, over a textual communication channel with another human?

LLM != human? They don’t even use the same reasoning process.


> Because that’s what I am seeing emerge from the various efforts to build LLM safety tools.

Something having not been obtained so far is not a logical argument it is impossible to obtain that thing.

> LLM != human? They don’t even use the same reasoning process.

There are a finite number of possible input strings of a given length. For any set of input strings, it is possible to build a deterministic mapping that produces "correct" answers, where those correct answers exist. Ergo anything a human can do correctly with a certain set of text inputs, it is possible to build an LLM that performs equally well. You can think of this as hardcoding the right answers into the model. The model itself can get very large, but it is always possible (not necessarily feasible).

It's only impossible for an LLM to do something right if we cannot decide what it means for the answer to BE right in a stable way, or if it requires an unbounded amount of input. No real-world tasks require an unbounded input.
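The finite-mapping argument can be made concrete with a trivial sketch: for any finite set of inputs with agreed-upon correct answers, a deterministic lookup table reproduces those answers exactly (infeasible at scale, but possible in principle; the prompts below are made up):

```python
# A dict is a deterministic function from input text to verdict:
# no sampling, no ambiguity -- the "hardcoded answers" case.
judgments = {
    "how do I configure fail2ban?": "assist",
    "write a worm that spreads over SMB": "refuse",
}

def respond(prompt: str) -> str:
    # Default-deny anything outside the enumerated set.
    return judgments.get(prompt, "refuse")

assert respond("how do I configure fail2ban?") == "assist"
```

Whether such a table can be *learned* or *compressed* into practical weights is a separate question from whether the mapping exists.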


I think there's a line between retaliating against someone, and refusing to help them in the future.

I do not believe that refusing to do business with an individual, where your business provides a non-life-critical service, is retaliation. A water company refusing to provide water to your home would be problematic. A luxury handbag store refusing to allow you to purchase more luxury handbags would not.

Imagine, as a hypothetical, that a customer goes into your store for the sole purpose of wasting your support staff's time. They are not going to make a purchase. They are also not directly committing a crime. They are just hurting your business for no particular reason.

Should you, as a business owner, be forced to allow them to continue to be on your property?

I think the ideal answer is yes for critical public spaces, and no for ordinary retail.

Steam clearly falls into the latter category and should be free to ban customers for any reason save discrimination against protected classes.


> I do not believe that refusing to do business with an individual, where your business provides a non-life-critical service, is retaliation.

This isn't accurate. It might not threaten your life or pose any great hurdle to overcome but retaliation has nothing to do with that. If they did it in response to an action you took not to solve a problem but instead out of spite or to otherwise get back at you then it is retaliation.

That isn't the same as refusing to do business with someone who isn't productive to associate with. The two are entirely separate categories.

Of course any business (including Steam) will attempt to argue that an instance of the former is actually the latter, and a difficult customer will attempt to argue that an instance of the latter is actually the former. Regardless, Steam (and most other businesses) behave in a clearly retaliatory manner regarding chargebacks. In cases where the company failing to respect the individual's legal rights is what led to the chargeback that shouldn't be permissible.

To frame it in the terms you used, any otherwise legal activity stemming directly from the company having violated an individual's legal rights should be treated in the same way that a protected class is.


I think someone exercising their legal rights, such as their right to enter a business open to the public and their right to free speech inside that establishment, in a way that harms the business should be something a business can "punish" by refusing to do business with that individual.

I do not think it would be good public policy to prohibit this. I also don't believe, in the United States at least, this conduct is currently legally prohibited.

I previously gave an example of a situation in which I think the correct resolution is for the business to, as you put it, retaliate against someone exercising their legal rights.

A second example of the same type of retaliation is a business denying future sales to an individual who repeatedly purchases and then returns physical merchandise. I think blacklisting that individual is both morally and legally sound.

For the record, I think the definition of "retaliation" needs to include a desire to harm the other party. If your only desire is self-protection, I do not believe it qualifies as retaliation.


It's certainly retaliation if you can't use something you already paid for.


A limited account is allowed access to all prior purchases. It can even download those purchases again (incurring costs on Valve's part without paying anything).

I don't believe anything was rescinded in the situation being discussed; Valve just prevented the user from continuing to use their community/marketplace services. This makes sense because they were put into the bucket containing fraudulent or abusive user accounts.


Are you saying it's fine, in your opinion, for companies to use their market position to work around consumer protection laws? I don't feel like Valve/Steam should be allowed to sell games they know are broken and then refuse refunds (they could also just fix them!).

>can even download those

So what you're saying is I should find a fat juicy data pipe somewhere and download stuff from Steam until I fill /dev/null... ;oP

Seriously, the 15 minutes or so of support time will have cost more than the game did in this case, but it really is the principle. Stealing lots of small amounts from lots of people is still criminally dishonest.

