A nice intro/showcase to Coq, I suppose. But the triviality of it frankly makes it hard for me to see what value this has or what it teaches us: we've just proven that one kind of syntax is equivalent to another, because of an intuitionistic tautology. What I'd like to know is what it would mean for the Rust type system if this *weren't* true, and, conversely, what the real difference is between Rust's opaque types and generics in function signatures, beyond their syntax and formulation.
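For readers outside Rust, here is a minimal sketch of the two signature styles being compared (function names are mine, not from the linked post). In argument position the two are interchangeable for callers passing values, though only the generic form lets the caller name the type; in return position `impl Trait` expresses something generics cannot:

```rust
use std::fmt::Display;

// Generic syntax: the type parameter T is explicit and nameable,
// e.g. callers may write show_generic::<i32>(42).
fn show_generic<T: Display>(x: T) -> String {
    format!("{}", x)
}

// impl Trait in argument position: behaves the same for ordinary calls,
// but the type parameter is anonymous and cannot be named by the caller.
fn show_opaque(x: impl Display) -> String {
    format!("{}", x)
}

// In return position the two are NOT equivalent: here impl Trait is an
// opaque type chosen by the callee, which a plain generic (where the
// caller chooses the type) cannot express.
fn make_greeting() -> impl Display {
    "hello"
}

fn main() {
    assert_eq!(show_generic(42), show_opaque(42));
    println!("{}", make_greeting());
}
```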
In 2022, "state of the art" means throwing a deep network at the problem. Given the right architecture and enough data, it will likely pick up on all of these findings (and better ones, incomprehensible to us) by itself. Still, I can't help but feel a bit saddened by this, seeing the ingenuity and mastery of all these cited names be obscured and superseded so easily.
I love the advances in the field and what machine learning will enable us to do, but I don't know what to make of this. One counterargument is that the ingenuity has simply moved: now it's the engineers who design the machine learning models. But it still depresses me, for some reason. I never thought I'd feel this way; am I the only one?
P.S.: I'm commenting purely on this topic, which is an ideal big-data case. Of course, we still have a long way to go with machine learning, and that's where human minds will especially have to shine.