Right now they're comparing to the 7B Llama 2, which is a shadow of the 65B Llama 2, which is a couple of steps behind ChatGPT, which is itself a shadow of GPT-4.
I'm still comfy saying yes, because there's no reason to doubt it will follow the same logic as "eventually $phone will have the same FLOPS as $desktop".
This link discusses, among other uses, dependent types for the likes of array bounds checking, for creating "sorts" of numbers satisfying various predicates, and, more interestingly, the utility of combining Dependent Types with PURE Linear Types (as opposed to Affine Types a la Rust, lol) ...
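For a taste of the array-bounds-checking idea, here's a hedged sketch in Rust. Rust's const generics are a much weaker cousin of real dependent types (DML/ATS can index types by runtime values and arithmetic constraints), but the flavor carries over: an `Index<N>` value is a proof that the index is in bounds, so the lookup itself needs no check. All names here (`Index`, `get`) are made up for illustration.

```rust
// Sketch only: const generics as a poor man's dependent type.
// An Index<N> can only be constructed if i < N, so holding one
// is a proof of in-boundedness for any [T; N].
#[derive(Clone, Copy)]
struct Index<const N: usize>(usize);

impl<const N: usize> Index<N> {
    // The one place the bound is checked; failure is an Option,
    // not a panic at the use site.
    fn new(i: usize) -> Option<Self> {
        if i < N { Some(Index(i)) } else { None }
    }
}

// Safe by construction: idx.0 < N is guaranteed by Index<N>'s
// constructor, so this indexing can never go out of bounds.
fn get<T: Copy, const N: usize>(arr: &[T; N], idx: Index<N>) -> T {
    arr[idx.0]
}

fn main() {
    let a = [10, 20, 30];
    if let Some(i) = Index::<3>::new(2) {
        println!("{}", get(&a, i)); // prints 30
    }
    // An out-of-bounds index is rejected at construction time:
    assert!(Index::<3>::new(5).is_none());
}
```

In a genuinely dependently typed language the `i < N` fact would live in the type itself and be discharged by the solver, rather than funneled through a runtime `Option`.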
AFAIK, DML was originally intended as a gateway to constructing typed assembly (a DARPA project).
Having been a researcher in this field, I originally found this quote from the original paper on 'Liquid Types' suspect....
"a system that combines Hindley-Milner type inference with Predicate Abstraction to automatically infer dependent types precise enough to prove a variety of safety properties" .... type inference with dependent types is undecidable!? Then again, this is one of the primary focuses of ATS3. I digress...
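To make the "predicates attached to types" idea concrete, here's a hedged Rust sketch of what a refinement type looks like when encoded by hand (everything here, `NonZero`, `safe_div`, is hypothetical illustration). The point of liquid types is that the predicate and its propagation are *inferred*; in plain Rust you establish the predicate once at a boundary and the type carries it from there:

```rust
// Sketch only: a manually encoded refinement { n: i64 | n != 0 }.
// Constructing a NonZero checks the predicate once; every later use
// relies on the type, not on repeated runtime checks.
#[derive(Clone, Copy)]
struct NonZero(i64);

impl NonZero {
    fn new(n: i64) -> Option<Self> {
        if n != 0 { Some(NonZero(n)) } else { None }
    }
}

// Cannot divide by zero: the divisor's type is the safety proof.
fn safe_div(a: i64, b: NonZero) -> i64 {
    a / b.0
}

fn main() {
    let two = NonZero::new(2).expect("2 is nonzero");
    println!("{}", safe_div(10, two)); // prints 5
    assert!(NonZero::new(0).is_none());
}
```

A liquid type system would infer the `!= 0` refinement and verify the division automatically, which is exactly the part that sounds like it should collide with undecidability — the trick is restricting predicates to a decidable fragment.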
This area of research is rich in pipe dreams and cerebral investigation. In my biased opinion, ATS3 (the current version of the successor to DML, being actively developed by its PI, Professor Hongwei Xi) is the only language in this space aimed at practical programming in industry... of course, there is always Haskell....
What I mean is: TEMPLATES + Linear Types + Dependent Types + Hindley-Milner type inference ....
At the end of the day, who doesn't love hyper-efficient meta-programming magic?
ATS is great. I’ve used ATS1 and ATS2 quite a bit and they worked well. Unfortunately the project suffers a bit from the “rewrite every few years” problem as research directions change.
"Perhaps achieving ChatGPT’s level of capability at the one billion parameters scale is actually achievable?"