While I wouldn't say the problem has been "well studied", the research community has been paying attention, and some progress has been made, most notably (as pointed out by benjaminjackman) LIME [1]. Roughly speaking, LIME learns a locally approximate model which can be interpreted. It will work with any black-box model, not just neural networks.
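In case it helps, here's a minimal sketch of what that looks like with the lime Python package. The RandomForest classifier and the iris dataset are just placeholders; any model exposing a predict_proba-style function works the same way.

    # Sketch: explaining one prediction of a black-box classifier with LIME.
    # The model and data here are stand-ins for whatever black box you have.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # LIME perturbs the instance, queries the black box, and fits a simple
    # interpretable model around it; as_list() gives local feature weights.
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())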
Not that I know of, but at their analyst day in May 2017 AMD showed slides in which they beat Nvidia's GP100 in Baidu's DeepBench with their new Vega GPU. [0][1] Since AMD recently released the Vega Frontier Edition [3][4], I posted this here in the hope of seeing some benchmarks from users here on HN.
With unknown code and datasets run under OpenCL, it isn't hard to produce any delta you want.
Using OpenCL code with a batch size that favors one vendor or the other is enough to cause this delta, and a much higher one (see the sketch below).
DeepBench isn't a benchmark, it's a benchmarking tool. Overall, given the current state of Nvidia's BLAS libraries and the rest of their ecosystem, there is very little chance that Vega is going to beat their hardware.
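Rough sketch of the batch-size point, in plain numpy on the CPU just to show the effect; the layer shape and batch sizes are made up, and on real GPUs the relative ranking of two cards can flip depending on which of these sizes a benchmark happens to report.

    # Times the forward pass of one dense layer (a GEMM) at several batch sizes.
    # Throughput (samples/s) varies a lot with batch size, which is why a
    # cherry-picked size can manufacture almost any headline delta.
    import time
    import numpy as np

    in_features, out_features = 4096, 4096
    weights = np.random.rand(in_features, out_features).astype(np.float32)

    for batch_size in (1, 16, 64, 256, 1024):
        x = np.random.rand(batch_size, in_features).astype(np.float32)
        start = time.perf_counter()
        for _ in range(10):
            _ = x @ weights
        elapsed = time.perf_counter() - start
        print(f"batch={batch_size:5d}  {10 * batch_size / elapsed:12.0f} samples/s")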
For someone who wants an in-depth look at sci-fi interfaces from a designer's perspective, https://scifiinterfaces.wordpress.com/ and the corresponding book [1] are pretty good resources. The book was even featured on YC's Winter (Summer?) reading list.
[1] https://www.reddit.com/r/privacy/comments/76c605/airborn_os_...