Thanks for posting this, that's how I first found out about Dan's experiment!
SSD speed doubled in the M5 Pro/Max generation, which makes this usable!
I think one paper that's flown under the radar is "KV Prediction for Improved Time to First Token" (https://arxiv.org/abs/2410.08391), which hopefully can help with prefill for flash streaming.
That's exactly what I was thinking about. I'm getting my hands on an M5 Max this week and will see how Dan's experiment performs with faster I/O. I'm also going to experiment with running the active parameters at Q6 or Q8: since output is I/O-bottlenecked, there should be room for higher-accuracy compute.
That was a very good summary. One detail the post could use is mentioning that the 4 or 10 experts invoked were selected from the 512 experts the model has per layer (to give an idea of the savings).
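To make the savings concrete, a quick back-of-envelope in Python (illustrative only; it assumes the routed expert weights dominate each layer and ignores the shared/attention weights):

    # Fraction of routed-expert weights touched per layer for a single token,
    # using the counts mentioned in this thread (512 experts total, 4 or 10 routed).
    total_experts = 512
    for active in (4, 10):
        share = active / total_experts
        print(f"{active}/{total_experts} experts -> {share:.1%} of expert weights per layer")

Of course which experts get picked changes from token to token and layer to layer, which is why the full set still has to be reachable on fast storage.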
Running larger-than-RAM LLMs is an interesting trick, but it's not practical. The output would be extremely slow and your computer would be burning a lot of power to get there. The heavy quantizations and other tricks (like reducing the number of active experts) used in these demos severely degrade the quality.
With 64GB of RAM you should look into Qwen3.5-27B or Qwen3.5-35B-A3B. From my experience, I'd suggest not going below Q5 quantization. Q4 works on short responses but gets weird in longer conversations.
>From my experience, I'd suggest not going below Q5 quantization. Q4 works on short responses but gets weird in longer conversations.
There are dynamic quants, such as Unsloth's, which quantize only certain layers to Q4. Some layers are more sensitive to quantization than others, and smaller models are more sensitive than larger ones. There are also different quantization algorithms, with different levels of degradation. So I think it's somewhat wrong to put "Q4" under one umbrella; it all depends.
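To illustrate the sensitivity point, here's a minimal sketch (nothing like Unsloth's actual recipe; real GGUF quants use block-wise scales and much smarter algorithms) that measures per-tensor round-trip error, which is the kind of signal you'd use to decide which tensors stay at higher precision:

    import numpy as np

    def fake_quant(w, bits):
        # Naive symmetric round-to-nearest quantization, purely illustrative.
        scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
        return np.round(w / scale) * scale

    def rel_error(w, bits):
        return np.linalg.norm(w - fake_quant(w, bits)) / np.linalg.norm(w)

    rng = np.random.default_rng(0)
    smooth = rng.normal(0.0, 0.02, (1024, 1024))   # made-up "well-behaved" tensor
    outliers = smooth.copy()
    outliers[0, :64] = 3.0                          # made-up tensor with outlier channels

    for name, w in (("smooth", smooth), ("outliers", outliers)):
        print(name, "Q4 err:", round(rel_error(w, 4), 3), "Q8 err:", round(rel_error(w, 8), 4))
    # A dynamic quant keeps the high-error tensors at Q6/Q8 and drops the rest to Q4,
    # which is why "Q4" as a single label doesn't tell you much.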
I've tried a number of experiments, and agree completely. If it doesn't fit in RAM, it's so slow as to be impractical and almost useless. If you're running things overnight, then maybe, but expect to wait a very long time for any answers.
Current local-AI frameworks do a bad job of supporting the doesn't-fit-in-RAM case, though, especially when running combined CPU+GPU inference. If you aren't very careful about how you run these experiments, the framework loads all the weights from disk into RAM only for the OS to swap them all out again (instead of mmap-ing the weights in from an existing file, or doing something morally equivalent, as with the original MacBook Pro experiment), which is quite wasteful!
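The "morally equivalent" thing is basically memory-mapping the checkpoint so the OS pages expert weights in on demand and can just drop clean pages under memory pressure instead of swapping. A minimal numpy sketch of the idea (file name, shape, and dtype are made-up placeholders, not any real framework's layout):

    import numpy as np

    # Map the checkpoint read-only instead of reading it all into the process heap.
    # Pages are faulted in from disk only when the data is actually touched, and the
    # kernel can evict them again without writing anything to swap.
    weights = np.memmap("experts.bin", dtype=np.float16, mode="r",
                        shape=(512, 2048, 1024))   # (experts, rows, cols) -- made up

    routed = [3, 17, 200, 511]       # whatever the router picked for this token
    for e in routed:
        expert_w = weights[e]        # lazy view; the compute below faults its pages in
        y = np.asarray(expert_w, dtype=np.float32)  # stand-in for the expert's matmul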
This approach also makes less sense for discrete GPUs where VRAM is quite fast but scarce, and the GPU's PCIe link is a key bottleneck. I suppose it starts to make sense again once you're running the expert layers with CPU+RAM.
Yes, SSD speed is critical though. The repo has macOS builds for CLI and Desktop.
It's early stages, though. An M4 Max gets 10-15 TPS on a 400B model, depending on quantization. Compute is an issue too, and a lot of the code is PoC-level.
I have a 64GB/1TB Studio with an M1 Ultra. You could probably run this model just to say you've done it, but it wouldn't be very practical.
Also, I wouldn't trust 3-bit quantization for anything real. I run a 5-bit qwen3.5-35b-A3B MoE model on my Studio for coding tasks, and even the 4-bit quant was more flaky (hallucinations, and sometimes it would think about running tool calls and just not run them, lol).
If you decide to give it a go, make sure to use the MLX version over the GGUF one! You'll get a bit more speed out of it.
But the claim that "one expert is 17B" is incorrect. Experts are picked with per-layer granularity (expert 1 for layer X may well be entirely unrelated to expert 1 for layer Y), and the individual layer-experts are tiny. The writeup for the original experiment is very clear on this.
OK, I am by no means an expert on this and I immediately stand corrected. But as I understand it, to get a sense of how much active memory is required, it's more accurate to go by the ~82B number, right?
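Assuming that ~82B figure is right (I haven't verified it), here's a rough sense of what it means in bytes at different quantizations, ignoring KV cache, activations, and framework overhead, and keeping in mind that the set of active experts changes every token, so the working set read from disk over a whole response is much larger than any single token's slice:

    # Rough memory needed just for the "active" parameters at various bit widths.
    # 82e9 is the figure from this thread, not something I've checked.
    active_params = 82e9
    for name, bits in (("Q3", 3), ("Q4", 4), ("Q6", 6), ("Q8", 8), ("FP16", 16)):
        print(f"{name}: ~{active_params * bits / 8 / 2**30:.0f} GiB")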
So much this! I've been bugging Astral about addressing the sandboxing challenge for a while; I wonder if that might take higher priority now they're at OpenAI?
Then give me your version of why it's not reasonable for the Python packaging community (who are the recipients of this data; it doesn't go to Astral) to want to collect aggregate numbers broken down by those platform details.
Any telemetry should be done after explicit user consent, period. The harm is that you normalize total surveillance with these little, seemingly innocent steps.
If you have hundreds of different Python projects on your machine (as I do) the speed and developer experience improvements of uv make a big difference.
I love being able to cd into any folder and run "uv run pytest" without even having to think about virtual environments or package versions.
Not really. I have good backups and I try to stick with dependencies I trust.
I do a lot of my development work using Claude Code for web, which means stuff runs in containers on Anthropic's servers, but I run things on my laptop most days as well.
The field that guesses if something is running in a CI environment is particularly useful, because it helps package authors tell if their package is genuinely popular or if it's just being installed in CI thousands of times a day by one heavy user who doesn't cache their requirements.
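For context, that kind of CI detection is normally just an environment-variable heuristic. A generic sketch, not uv's actual implementation:

    import os

    # Most CI systems set well-known environment variables; checking a handful of them
    # is the usual way tools guess they're running in CI.
    CI_HINTS = ("CI", "GITHUB_ACTIONS", "GITLAB_CI", "CIRCLECI", "JENKINS_URL", "BUILD_ID")

    def looks_like_ci() -> bool:
        return any(os.environ.get(v) for v in CI_HINTS)

    print(looks_like_ci())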
Honestly, stripping this data and then implying that it was collected by Astral/OpenAI in a creepy way is a bad look for this new fork. They should at least clarify in their documentation what the "telemetry" does so as not to make people think Astral were acting in a negative way.
Personally I think stripping the telemetry damages the Python community's ability to understand the demographics of package consumption while not having any meaningful impact on end-user privacy at all.
This is so upsetting. No wonder people spend more time in mobile apps than they do using the mobile web - the default web experience on so many sites is terrible.
I suspect I will too. I've been playing with the app a bit, as it's easier on my phone to view subs that are mostly pictures (e.g. aquariums). But I only do it from time to time.
It kind of doesn’t matter. The thing that makes Reddit, to me, is its size. Lemmy will never get there, so it won’t be able to replace it for me.
I love Mastodon, and it's what I use, but it doesn't give me back what I lost with Twitter. Some people stayed, some went to Bluesky, some to Threads, and some just gave up.
And we’ll never have it again. Assholes destroyed a whole world out of selfishness.
I'm honestly amazed they tried that. It's been so long that it felt like a play to cash in on the name, but I feel like a huge chunk of people don't really remember it or weren't even around for it.
To say nothing of all the personal data the app is hoovering up. Guarantee that every last thing you granted permissions for is something they're monetizing.
I left that page open in Firefox on macOS (no ad blockers) and after five minutes the network devtools panel showed me it had hit 200MB transferred, 250MB total from over 2,300 requests.