TPM is more or less an API specification. The specification is fine, but people worry about implementation backdoors and pre-provisioned keys. It should be possible to have an open-source, publicly trustable implementation that anyone can synthesise onto an FPGA or into a real chip design. That ought to allay fears about backdoors while keeping a mature security model and good software support. I suspect there isn't sufficient demand or skill for such a project.
> I suspect there isn't sufficient demand or skill for such a project.
IMHO it's more like: there is little to no profit in this, and not much motivation for AMD/Intel to provide it. It would also involve additional work, especially if the fTPM implementation they currently use (partially) relies on code licensed from other companies which they can't open source.
Though you wouldn't need an FPGA: for a TPM to really be secure, you want it integrated into the CPU. Most of the time that means it's not a "special" physical chip but just a "standard" co-processor running some software. E.g., in the case of ARM Android smartphones it's likely a more or less normal Cortex-M0 processor (and it likely runs more than "just" a TPM, e.g. some DRM pipeline-protection code).
So theoretically you would just need to publish the "bare metal" code, and anyone could analyze it and then build and run it, e.g. using qemu (if qemu can handle such co-processors, idk). And by also allowing the built code to be extracted from the CPU, combined with reproducible builds, you could verify it runs what it says it runs (kinda — I mean, who says there isn't a hardware backdoor rewriting the code? Being an FPGA doesn't help there either, because the FPGA hardware could likewise rewrite the FPGA bitstream... at least theoretically).
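The verification step described above (reproducible build + firmware extraction) boils down to comparing hashes of two binary images. A minimal sketch in Python, assuming hypothetical file names — `extracted.bin` for the image read back from the co-processor and `reproducible.bin` for the locally built artifact:

```python
import hashlib

def sha256_file(path):
    """Hash a binary file in chunks so large firmware images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def firmware_matches(extracted_path, built_path):
    """True if the extracted firmware is bit-identical to the reproducible build.

    Note: file names are placeholders; a real flow would also have to trust
    the extraction mechanism itself, as discussed above.
    """
    return sha256_file(extracted_path) == sha256_file(built_path)
```

Of course this only proves the images match; it says nothing about hardware below the firmware, which is exactly the caveat in the comment above.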
As I'm sure you're aware FPGAs are nice because they can complicate things for an attacker in the physical supply chain. Andrew "bunnie" Huang did an excellent talk on the subject: