The biggest problem with this concept is that while ASICs can be extremely efficient, FPGAs are much less efficient than the equivalent ASIC. The flexibility comes at a substantial cost in power, chip area, and speed, so much so that raw number crunching, for example, is more efficiently done by CPUs or GPUs in almost all cases. You need a particularly quirky computation before an FPGA is a good accelerator.

FPGAs are more naturally suited to applications that require ultra-low (or ultra-predictable) latency, extremely high-bandwidth I/O with relatively little processing, or particularly specialised DSP. Most of these are best served by an FPGA with a CPU attached, rather than the other way around, and for the majority of users the FPGA is unlikely to be worth its cost.

For an example of a specialised use case: ASIC designers use racks of FPGAs to simulate the digital logic in large-scale designs, which are otherwise far too difficult to simulate on a CPU. A CPU really struggles to simulate billions of separate logic elements individually, and latency is a real killer when you try to parallelise the work across cores. Even so, these simulations run much, much slower than realtime.
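To see why CPUs struggle here, consider what a naive gate-level simulator has to do: evaluate every gate, one at a time, for every simulated clock cycle, so the cost per cycle grows linearly with the number of gates, while an FPGA switches all of its logic elements in parallel. A minimal sketch (the netlist format and gate set here are purely illustrative, not any real simulator's):

```python
# Illustrative sketch: a CPU evaluates gates one at a time per simulated
# cycle (O(gates) work per cycle), whereas an FPGA evaluates them all in
# parallel. Netlist representation and gate set are made up for this example.

GATES = {
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "xor": lambda a, b: a ^ b,
}

def simulate_cycle(netlist, signals):
    """Evaluate every gate once, assuming the netlist is already in
    dependency order (inputs computed before they are consumed)."""
    for out, (op, a, b) in netlist:
        signals[out] = GATES[op](signals[a], signals[b])
    return signals

# Half adder: sum = a XOR b, carry = a AND b
netlist = [
    ("sum",   ("xor", "a", "b")),
    ("carry", ("and", "a", "b")),
]
signals = simulate_cycle(netlist, {"a": 1, "b": 1})
# signals now holds sum = 0, carry = 1
```

Scale that inner loop up to billions of gates, and add the synchronisation cost of splitting the netlist across cores whose signals depend on each other every cycle, and the slowdown versus real hardware becomes obvious.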