Hacker News

If you could replace all the vendor keys (including the firmware signing keys) with your own then TPM could make sense. But today's TPMs don't support that.


> If you could replace all the vendor keys [...]

You very much can. It's trivial.

TPMs have four key "hierarchies" each of which has a seed. Of those four, one (the "null" hierarchy) gets a random seed each time the TPM is reset, while the other three (the platform, endorsement, and owner hierarchies) have their seeds stored in EEPROM/NVRAM, and there are functions in the spec for replacing those with new, randomly generated seeds.

All primary keys in a TPM are derived from seeds, and the derived keys are not stored anywhere -- they are always derived as needed from the hierarchy's seed and a given template. Therefore changing a hierarchy's seed loses all access to primary keys previously used in that hierarchy.

All other keys are saved off-chip encrypted to a primary key. Thus rotating a hierarchy's seed loses all access to all keys previously used in that hierarchy (not just the primary keys).
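To make the mechanics above concrete, here is a minimal Python sketch of the idea. It is *not* the real TPM 2.0 KDF or wrapping scheme; the function names (`derive_primary`, `wrap_key`) and the XOR "encryption" are illustrative stand-ins chosen only to show why rotating a hierarchy seed destroys access to every key in that hierarchy:

```python
# Conceptual sketch, not the actual TPM 2.0 algorithms.
import hashlib
import hmac
import secrets

def derive_primary(seed: bytes, template: bytes) -> bytes:
    # Primary keys are a deterministic function of (seed, template):
    # the same inputs always reproduce the same key, so the TPM never
    # needs to store the primary key itself.
    return hmac.new(seed, template, hashlib.sha256).digest()

def wrap_key(primary: bytes, child: bytes) -> bytes:
    # Real TPMs use authenticated encryption; XOR with a keystream
    # derived from the primary stands in for that here.
    stream = hashlib.sha256(primary).digest()
    return bytes(a ^ b for a, b in zip(child, stream))

seed = secrets.token_bytes(32)
template = b"example storage key template"

primary = derive_primary(seed, template)
child = secrets.token_bytes(32)       # an "ordinary" (non-primary) key
blob = wrap_key(primary, child)       # stored off-chip, encrypted

# The primary can be re-derived on demand from the same seed,
# so the off-chip blob stays recoverable.
assert wrap_key(derive_primary(seed, template), blob) == child

# Rotate the seed: the same template now yields a different primary,
# and the off-chip blob can no longer be unwrapped.
new_seed = secrets.token_bytes(32)
assert wrap_key(derive_primary(new_seed, template), blob) != child
```

The second assertion is the whole point: nothing about the old keys is "deleted", yet everything derived from or wrapped under the old seed becomes unreachable the moment the seed changes.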

The only thing that is remotely problematic here is that you have to trust the TPM's RNG. If you're paranoid you might believe that the RNG is itself a PRNG with a hidden seed and that the manufacturer knows it. But if you roll the endorsement hierarchy seed and delete the endorsement key certificate from the TPM, then how will the manufacturer identify the TPM in order to look up its putative hidden RNG seed? Even if they could find it, how would they know which RNG output was used as the hierarchy's new seed?

So, yes, you can "replace all the vendor keys". It really is trivial. However, before you do it you may want to use the existing keys and certificates to bootstrap (enroll) the host into your organization's network and then change the seeds and certify various public keys as derived after the seeds are changed so that you can continue using the TPM for attestation.


You cut off the "including the firmware signing keys" part; that's critical. Otherwise the vendor could be coerced or subverted into signing malicious firmware, which then subverts your system at runtime.

Only when you can bring your own keys for the entire boot and trust chain can you untether yourself from the vendor once you have purchased the hardware.



Yes, you can get compromised through firmware updates. But then, not using a TPM also leaves you vulnerable to firmware and software updates. If NSA has compromised all TPM vendors, then you can expect that they've compromised much more still, and so you've basically lost the fight against them. Key management is always a weak link in the chain.

I.e., I'm objecting to this focus on TPM in TFA and this discussion because a voltage fault injection vulnerability in the SP is fatal to security regardless of TPM usage/non-usage. I'm also objecting to the idea that TPM adds vulnerabilities when a non-TPM-using system is already full of ways for the NSA and/or other such agencies to backdoor it.


> Yes, you can get compromised through firmware updates.

It's not just about updates through regular channels but about evil maid attacks with signed malicious firmware. This vector would be avoidable if you could sever the trust relationship.

Another wrinkle is that some of the blobs are encrypted (e.g. ME), so they can't even be audited.

Currently too much of the trust chain relies on untrustworthy components. So you can't trust the system. But the DRM vendor can trust it well enough for their purposes. Which makes them a negative.

> I.e., I'm objecting to this focus on TPM in TFA and this discussion because a voltage fault injection vulnerability in the SP is fatal to security regardless of TPM usage/non-usage.

Yes, agreed.


> Currently too much of the trust chain relies on untrustworthy components.

This will always be true unless you build all the components yourself. And you don't have time to build all the components yourself. Therefore this will always be true.

With root of trust measurement you get to see that you're running code you've arbitrarily decided to trust. Everything else you could do would be mitigations (e.g., look for access patterns that imply compromise) or attempts to suss out vulnerable and/or backdoored components (e.g., reverse engineering and analysis). Not that one shouldn't do those other things, but root of trust measurement is still both essential and insufficient on its own.
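The measurement idea above can be sketched in a few lines. This is an illustrative model of a PCR-style extend chain, not real TPM code; the boot stage names are made up, but the extend formula (new value = hash of old value concatenated with the hash of the measured component) does follow the TPM 2.0 pattern:

```python
# Sketch of root-of-trust measurement: each boot stage hashes the next
# component into a PCR-style register before handing off control.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new_pcr = H(old_pcr || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start at all zeros on reset.
pcr = bytes(32)
for component in [b"firmware image", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)
expected = pcr

# Tamper with one stage: the final register value no longer matches,
# so attestation against the expected value fails.
pcr = bytes(32)
for component in [b"firmware image", b"evil bootloader", b"kernel"]:
    pcr = extend(pcr, component)

assert pcr != expected
```

Because the final value depends on every measurement and their order, a verifier comparing against a known-good value detects any swapped or modified component, but, as said above, only among the code that actually gets measured.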

Remember: perfection is the enemy because it's unattainable. We can tilt at windmills, but that won't get us anywhere.


> It's not just about updates through regular channels but about evil maid attacks with signed malicious firmware.

Yes, evil maid attacks are the primary vector for targeted attacks using malicious firmware.

So this is something that maybe the TCG should tackle. It should be possible (maybe it is?) to require that the host meet some policy before the firmware update can run -- this would prevent unauthenticated evil maid attacks.



