Hacker News | samwho's comments

Thank you!

“Zero point” is how I saw it referred to in the literature, so that’s what I went with. I personally prefer to think of it as an offset, but I try to stick with terms folks are likely to see in the wild.
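To make the "zero point as offset" reading concrete, here's a minimal sketch of asymmetric uint8 quantization. The names and the [qmin, qmax] defaults are illustrative, not from any particular library:

```python
# Asymmetric (zero-point) quantization to uint8 -- illustrative sketch.

def quant_params(xmin, xmax, qmin=0, qmax=255):
    """Derive scale and zero point so [xmin, xmax] maps onto [qmin, qmax]."""
    scale = (xmax - xmin) / (qmax - qmin)
    # The zero point is the integer that real-valued 0.0 maps to --
    # this is exactly the "offset" reading of the same idea.
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Map the range [-1.0, 3.0] onto [0, 255].
scale, zp = quant_params(-1.0, 3.0)
q = quantize(0.0, scale, zp)
x = dequantize(q, scale, zp)
assert q == zp  # real 0.0 lands exactly on the zero point
```

The asymmetry is the point: without the offset, a range like [-1, 3] would waste half the integer grid.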

Fair enough, thanks!

You’re welcome! Thanks so much for the kind words.

Definitely could be, but in the time I spent talking to the 4-bit models in comparison to the 16-bit original, they still seemed surprisingly capable. I do recommend benchmarking quantized models on the specific tasks you care about.

Thank you! I was really surprised how robust models are to losing information. It seems wrong that they can be compressed so much and still function at all, never mind perform so close to the full-precision original.

I think we're only going to keep seeing more progress in this area on the research side, too.


You can even train in 4 & 8 bits with newer microscaled formats! From https://arxiv.org/pdf/2310.10537 to gpt-oss being trained (partially) natively in MXFP4 - https://huggingface.co/blog/RakshitAralimatti/learn-ai-with-...

To Nemotron 3 Super, which had 25T tokens of NVFP4-native pretraining! https://docs.nvidia.com/nemotron/0.1.0/nemotron/super3/pretr...
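The core trick behind these microscaled formats is block scaling: each small block of values shares one power-of-two scale, and each element is stored in FP4 (E2M1). A simplified numpy sketch of MXFP4-style quantization, with the scale-selection rule and function names being my own illustrative choices (real kernels handle rounding and saturation more carefully):

```python
import numpy as np

# The 15 representable signed FP4 (E2M1) values: +/-{0.5, 1, 1.5, 2, 3, 4, 6} and 0.
FP4_POS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_POS[:0:-1], FP4_POS])

def mx_quantize_block(block):
    """Quantize one block of 32 values with a shared power-of-two scale."""
    amax = np.abs(block).max()
    # Pick the power-of-two scale (E8M0 in the MX spec) so the block's
    # largest magnitude fits within FP4's max representable value, 6.0.
    scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
    # Snap each scaled element to the nearest representable FP4 value.
    idx = np.abs(block[:, None] / scale - FP4_GRID[None, :]).argmin(axis=1)
    return FP4_GRID[idx] * scale

x = np.random.default_rng(0).standard_normal(32)
x_hat = mx_quantize_block(x)
```

Because the scale is a power of two, storing it takes one byte per 32 elements, and applying it is just an exponent adjustment rather than a multiply.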


Newer quantization approaches are even better: 4 bits gets you no meaningful loss relative to FP16: https://github.com/z-lab/paroquant

Hopefully Microsoft keeps pushing BitNet too, so only "1.58" bits are needed.
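For context, the "1.58 bits" comes from constraining weights to the ternary set {-1, 0, +1} (log2(3) ≈ 1.58 bits per weight). The BitNet b1.58 paper does this with absmean scaling followed by round-and-clip; a toy numpy sketch, with the variable names being mine:

```python
import numpy as np

def ternarize(w, eps=1e-8):
    """Absmean scaling + round-and-clip to {-1, 0, +1}, per BitNet b1.58."""
    gamma = np.abs(w).mean()  # absmean scale for the weight tensor
    q = np.clip(np.round(w / (gamma + eps)), -1, 1)
    return q, gamma

w = np.array([0.9, -0.1, 0.02, -1.2, 0.4])
q, gamma = ternarize(w)
# q holds only -1, 0, and +1; gamma is kept to rescale activations.
```

With ternary weights, matrix multiplies reduce to additions and subtractions, which is where the potential hardware win comes from.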

I think fractional representations are only relevant for training at this point, and bf16 is sufficient, no need for fp4 and such.


Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

Not sure why you think fractional representations are only useful for training? Being able to natively compute in lower precisions can be a huge performance boost at inference time.


> Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

Indeed, but much better! More accurate, less time and space overhead, beats AWQ on almost every bench. I hope it becomes the standard.

> In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

I hope you're wrong! I'm more optimistic. Definitely a bit more work to be done, but still very promising.

> Being able to natively compute in lower precisions can be a huge performance boost at inference time.

ParoQuant is barely worse than FP16. Any less precise fractional representation is going to be worse than just using that IMO.


Thanks for linking to my silly little quiz in the article! :)


I wrote a tool called llmwalk (https://github.com/samwho/llmwalk) that'll deterministically show you the likelihoods of the top N answers for a given open model and prompt. No help on frontier models, but maybe helpful if you want to run a similar analysis more quickly on open models!
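The core computation a tool like this performs — softmax over the model's final-position logits, then take the top N — can be sketched in a few lines. This is a standalone illustration on a toy vocabulary and made-up logits, not llmwalk's actual code:

```python
import numpy as np

def top_n_probs(logits, tokens, n=3):
    """Return the n highest-probability (token, probability) pairs."""
    z = logits - logits.max()          # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    order = np.argsort(p)[::-1][:n]    # indices sorted by descending probability
    return [(tokens[i], float(p[i])) for i in order]

# Toy vocabulary and logits standing in for a real model's output.
tokens = ["yes", "no", "maybe", "unknown"]
logits = np.array([3.0, 2.0, 0.5, -1.0])
top = top_n_probs(logits, tokens)
```

With a real open model you'd get `logits` from the last position of the forward pass; the rest is the same.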


My mistake! Misunderstood the underlying dataset. Not sure how to edit it though. Thanks for calling it out.


Oh, you beat me to it! :D


I love this, the end result looks so good.

Something you don’t really mention in the post is why you did this. Do you have an end goal or utility in mind for the bookshelf? Is it literally just to track ownership? What do you do with that information?


Thanks! Honestly, there’s no big utility behind it. I didn’t build it to optimize anything or track data, it just felt good to make.

I want my website to slowly become a collection of things I do and like, and this bookshelf is just one of those pieces.


I like that it's fun, and that is what AI vibe coding should be.


Thank you <3


