
He's running a 35B-parameter model. Frontier models are well over a trillion parameters at this point. Parameters = smarts. There are 1T+ open-source models (e.g. GLM5), and they're actually getting to the point of being comparable to the closed-source models; but you can't remotely run them on any hardware available to us.

Core speed/count and memory bandwidth determine your performance. Memory size determines your model size, which determines your smarts. Broadly speaking.
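To make the bandwidth point concrete, here's a napkin-math sketch (my own, not from the thread): at batch size 1, decoding is memory-bandwidth-bound, so a rough ceiling on generation speed is bandwidth divided by the bytes of weights streamed per token. The function name and numbers below are illustrative assumptions.

```python
def max_tokens_per_sec(active_params_b: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Rough upper bound on decode speed: each generated token requires
    streaming all active weights from memory once, so speed is capped at
    bandwidth / model size in bytes. Ignores KV cache, overlap, etc."""
    model_size_gb = active_params_b * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# e.g. a 35B dense model quantized to 8-bit (1 byte/param) on ~1 TB/s:
print(max_tokens_per_sec(35, 1.0, 1000))  # ~28.6 tokens/sec ceiling
```

This is why a MoE with few active parameters can decode fast even when its total parameter count won't fit on consumer GPUs.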




The architecture is also important: there's a trade-off for MoE. There used to be a rough rule of thumb that a 35B-A3B MoE model would be equivalent in smarts to an 11B dense model, give or take, but that hasn't been accurate for a while.
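The rule of thumb being referenced is usually stated as the geometric mean of total and active parameter counts; a minimal sketch, assuming that formulation (the thread doesn't spell it out):

```python
import math

def effective_dense_params(total_b: float, active_b: float) -> float:
    """Old heuristic (no longer considered accurate): a MoE's 'effective'
    dense-equivalent size is roughly the geometric mean of its total and
    per-token active parameter counts, both in billions."""
    return math.sqrt(total_b * active_b)

# 35B total, ~3B active per token:
print(effective_dense_params(35, 3))  # ~10.2, i.e. ~11B give or take
```

Modern MoE training recipes beat this estimate, which is why the rule is described as outdated.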

> There are 1T+ open source models (e.g. GLM5),

GLM-5 is a ~750B model.



