Hacker News | coder543's comments

If you search the model card[0], there is a section titled "Code for processing Audio", which you can probably use to test things out. But, the model card makes the audio support seem disappointing:

> Audio supports a maximum length of 30 seconds.

[0]: https://huggingface.co/google/gemma-4-26B-A4B-it#getting-sta...
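Presumably, working around that cap means splitting longer audio into ≤30-second pieces before handing them to the model. A minimal sketch of that chunking (the 16 kHz sample rate and the helper itself are my assumptions, not from the model card):

```python
# Split a long audio signal into chunks of at most 30 seconds each,
# since the model card states audio input is capped at 30 seconds.
# Assumes `samples` is a flat sequence of samples at a known sample rate.

def chunk_audio(samples, sample_rate=16000, max_seconds=30):
    """Yield successive slices of at most max_seconds of audio."""
    chunk_len = sample_rate * max_seconds  # samples per chunk
    for start in range(0, len(samples), chunk_len):
        yield samples[start:start + chunk_len]

# Example: 75 seconds of (silent) audio at 16 kHz -> chunks of 30s, 30s, 15s
audio = [0.0] * (16000 * 75)
chunks = list(chunk_audio(audio))
print([len(c) / 16000 for c in chunks])  # -> [30.0, 30.0, 15.0]
```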


The E2B and E4B models support 128k context, not 256k, and even with the 128k... it could take a long time to process that much context on most phones, even with the processor running full tilt. It's hard to say without benchmarks, but 128k supported isn't the same as 128k practical. It will be interesting to see.

That Pareto plot doesn't seem to include the Gemma 4 models anywhere (not just absent from the frontier), likely because pricing wasn't available when the chart was generated. At least, I can't find the Gemma 4 models there. So it's not particularly relevant until it is updated for the models released today.

There are issues with the chat template right now[0], so tool calling does not work reliably[1].

Every time people try to rush to judge open models on launch day... it never goes well. There are ~always bugs on launch day.

[0]: https://github.com/ggml-org/llama.cpp/pull/21326

[1]: https://github.com/ggml-org/llama.cpp/issues/21316


What causes these? Given how simple the LLM interface is (just completion), why don't teams make a simple, standardized template available with their model release so the inference engine can just read it and work properly? Can someone explain the difficulty with that?

The model does have the format specified, but there is no _one_ standard. For this model, it's defined in the tokenizer_config.json [0]. As for llama.cpp, they seem to use a more type-safe approach to reading the arguments.

[0] https://huggingface.co/google/gemma-4-31B-it/blob/main/token...
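To make that concrete: the chat template ships inside tokenizer_config.json as a Jinja string, and each inference engine has to parse and render it itself, which is where strictness differences creep in. A sketch of reading that field (the JSON below is a tiny illustrative config, not the actual Gemma template):

```python
import json

# tokenizer_config.json ships a Jinja chat template alongside the weights.
# This is a toy illustrative config, NOT the real Gemma template.
config_text = '''
{
  "chat_template": "{% for m in messages %}<start>{{ m['role'] }}\\n{{ m['content'] }}<end>\\n{% endfor %}"
}
'''

config = json.loads(config_text)
template = config["chat_template"]

# Engines render this with a Jinja implementation; llama.cpp uses its own
# C++ one, which is where behavioral mismatches with the reference can bite.
print("{%" in template)  # -> True
```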


Hm, but surely there will be converters for such simple formats? I'm confused as to how there can be tool-calling bugs when the model already includes the template.

That PR was just merged.

It was just an example of a bug, not that it was the only bug. I’ve personally reported at least one other for Gemma 4 on llama.cpp already.

In a few days, I imagine that Gemma 4 support should be in better shape.


For the many DGX Spark and Strix Halo users with 128GB of memory, I believe the ideal model size would probably be a MoE with close to 200B total parameters and a low active count of 3B to 10B.

I would personally love to see a super sparse 200B A3B model, just to see what is possible. These machines don't have a lot of bandwidth, so a low active count is essential to getting good speed, and a high total parameter count gives the model greater capability and knowledge.

It would also be essential to have the Q4 QAT, of course. Then the 200B model weights would take up ~100GB of memory, not including the context.
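The arithmetic behind that ~100GB figure, assuming a flat 4 bits per weight at Q4 (real quant formats add a little overhead for scales):

```python
# Back-of-the-envelope weight memory for a 200B-parameter model at Q4.
total_params = 200e9
bits_per_param = 4  # Q4; real quants are slightly higher with scale overhead

weight_bytes = total_params * bits_per_param / 8
print(weight_bytes / 1e9)  # -> 100.0 (GB, excluding KV cache / context)
```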

The common 120B size these days leaves a lot of unused memory on the table on these machines.

I would also like the larger models to support audio input, not just the E2B/E4B models. And audio output would be great too!


200B-A10B please; 200B-A3B is too little active compute to have good intelligence IMO, and 10B active is still reasonably fast.

Following the current rule of thumb for MoE effective capacity, `sqrt(total * active)`, a 200B-A3B would have the intelligence of a ~24B dense model.
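Plugging in the numbers for that rule of thumb:

```python
import math

# Rule-of-thumb "effective dense size" of a MoE: sqrt(total * active).
total, active = 200, 3  # billions of parameters
effective = math.sqrt(total * active)
print(round(effective, 1))  # -> 24.5 (i.e. roughly a 24B dense model)
```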

That seems pointless. You can achieve that with a single 24GB graphics card already.

I wonder if it would even hold up at that level, as 3B active is really not a lot to work with. Qwen 3.5 uses 122B-A10B and still is neck and neck with the 27B dense model.

I don't see any value proposition for these little boxes like DGX Spark and Strix Halo. Lots of too-slow RAM to do anything useful except run mergekit. IMO you'd be better off building a desktop computer with two 3090s.


That rule of thumb was invented years ago, and I don’t think it is relevant anymore, despite how frequently it is quoted on Reddit. It is certainly not the "current" rule of thumb.

For the sake of argument, even if we take that old rule of thumb at face value, you can see how the MoE still wins:

- (DGX Spark) 273GB/s of memory bandwidth with 3B active parameters at Q4 = 273 / 1.5 = 182 tokens per second as the theoretical maximum.

- (RTX 3090) 936GB/s with 24B parameters at Q4 = 936 / 12 = 78 tokens per second. Or 39 tokens per second if you wanted to run at Q8 to maximize the memory usage on the 24GB card.

The "slow" DGX Spark is now more than twice as fast as the RTX 3090, thanks to an appropriate MoE architecture. Even with two RTX 3090s, you would still be slower. All else being equal, I would take 182 tokens per second over 78 any day of the week. Yes, an RTX 5090 would close that gap significantly, but you mentioned RTX 3090s, and I also have an RTX 3090-based AI desktop.

(The above calculation is dramatically oversimplified, but the end result holds, even if the absolute numbers would probably be less for both scenarios. Token generation is fundamentally bandwidth limited with current autoregressive models. Diffusion LLMs could change that.)
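The oversimplified calculation above, spelled out (0.5 bytes per parameter at Q4, and assuming every active parameter must be streamed from memory once per token):

```python
# Theoretical max decode speed = memory bandwidth / bytes read per token.
# At Q4, each active parameter is ~0.5 bytes, all streamed once per token.
def max_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param=0.5):
    return bandwidth_gb_s / (active_params_b * bytes_per_param)

print(round(max_tokens_per_sec(273, 3)))   # DGX Spark, 3B active  -> 182
print(round(max_tokens_per_sec(936, 24)))  # RTX 3090, 24B dense   -> 78
```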

The mid-size frontier models are rumored to be extremely sparse like that, but 10x larger on both total and active. No one has ever released an open model that sparse for us to try out.

As I said, I want to see what is possible for Google to achieve.

> Qwen 3.5 uses 122B-A10B and still is neck and neck with the 27B dense model.

From what I've seen, having used both, I would anecdotally report that the 122B model is better in ways that aren't reflected in benchmarks, with more inherent knowledge and more adaptability. But, I agree those two models are quite close, and that's why I want to see greater sparsity and greater total parameters: to push the limits and see what happens, for science.


Kimi 2.5 is relatively sparse at 1T/32B; GLM 5 does 744B/40B so only slightly denser. Maybe you could try reducing active expert count on those to artificially increase sparsity, but I'm sure that would impact quality.

Reducing the expert count after training causes catastrophic loss of knowledge and skills. Cerebras does this with their REAP models (although it is applied to the total set of experts, not just routing to fewer experts each time), and it can be okay for very specific use cases if you measure which experts are needed for your use case and carefully choose to delete the least used ones, but it doesn't really provide any general insight into how a higher sparsity model would behave if trained that way from scratch.

Large MoE models are too heavily bottlenecked on typical discrete GPUs. You end up pushing just a few common/non-shared layers to GPU and running the MoE part on CPU, because the bandwidth of PCIe transfers to a discrete GPU is a killer bottleneck. Platforms with reasonable amounts of unified memory are more balanced despite the lower VRAM bandwidth, and can more easily run even larger models by streaming inactive weights from SSD (though this quickly becomes overkill as you get increasingly bottlenecked by storage bandwidth: you'd be better off then with a plain HEDT accessing lots of fast storage in parallel via abundant PCIe lanes).

The value prop for the Nvidia one is simple: playing with CUDA with wide enough RAM at okay enough speeds, then running your actual workload on a server running the same architecture (not really, lol, Blackwell does not mean Blackwell…).

They’re fine-tuning and teaching boxes, not inference boxes. IMO anyway, that’s what mine is for.


> Wild differences in ELO compared to tfa's graph

Because those are two different, completely independent Elos... the one you linked is for LMArena, not Codeforces.


That Codex one comes from the new `github` plugin, which includes a `github:yeet` skill. There are several ways to disable it: you can disconnect github from codex entirely, or uninstall the plugin, or add this to your config.toml:

    [[skills.config]]
    name = "github:yeet"
    enabled = false

I agree that skill is too opinionated as written, with effects beyond just creating branches.

What's weird is, I never installed any github plugins, or indeed any customization to Codex, other than updating using brew... so I was so confused when this started happening.

If you visit https://chatgpt.com/codex/settings/connectors, you're saying you don't have GitHub connected?

Plugins are a new feature as of this past week, so Codex "helpfully" installs the GitHub one automatically if you have GitHub connected.


From my point of view, Parakeet is not very good at formatting its output, so it would be nice if a small model focused on producing nicely formatted (and correct) text, not just the lowest WER score: rewarding the model for inserting logical line breaks, quotation marks, etc.


I wish someone would also thoroughly measure prompt processing speeds across the major providers too. Output speeds are useful too, but more commonly measured.


In my use case for small models I typically only generate a max of 100 tokens per API call, with the prompt processing taking up the majority of the wait time from the user perspective. I found OAI's models to be quite poor at this and made the switch to Anthropic's API just for this.

I've found Haiku to be a pretty fast at PP, but would be willing to investigate using another provider if they offer faster speeds.
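One way to compare providers yourself is to time the gap before the first streamed token arrives (time to first token, which is dominated by prompt processing when outputs are short). A sketch of the measurement, with a stand-in generator in place of a real streaming API call:

```python
import time

def time_to_first_token(stream):
    """Return (seconds until the first chunk arrives, list of all chunks)."""
    start = time.perf_counter()
    ttft = None
    chunks = []
    for chunk in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        chunks.append(chunk)
    return ttft, chunks

# Stand-in for a real streaming completion (e.g. an SSE token stream):
def fake_stream():
    time.sleep(0.05)  # simulated prompt-processing delay before first token
    for tok in ["Hello", ",", " world"]:
        yield tok

ttft, chunks = time_to_first_token(fake_stream())
print(f"TTFT: {ttft:.3f}s", chunks)
```

Run against a real provider, the same function wraps the streaming iterator their SDK returns, and the TTFT on a long prompt with a short `max_tokens` is a decent proxy for prompt-processing speed.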


OpenRouter has this information.


I do not see prompt processing, only some kind of nebulous “throughput” that could be output or input+output, but definitely not input only.


> Siri/iOS-Dictation is truly good when it comes to understanding the speech.

What...? It is terrible, even compared to Whisper Tiny, which was released years ago under a permissive open-source license, so Apple could have adopted it instantly and integrated it into their devices. The bigger Whisper models are far better, and Parakeet TDT V2 (English) / V3 (Multilingual) are quite impressive and very fast.

I have no idea what would make someone say that iOS dictation is good at understanding speech... it is so bad.

For a company that talks so much about accessibility, it is baffling to me that Apple continues to ship such poor quality speech to text with their devices.


Maybe they have exactly the accent iOS dictation was trained to recognize.


Its quality isn’t great, but it is damn fast and that matters a lot! Whisper doesn’t even work live without hacks.


Parakeet is insanely fast and much more accurate, and it doesn't really matter that Whisper requires hacks to work live when those hacks have existed for years and work great. (The Hello Transcribe app on iOS is a great example of how well Whisper can work with live streaming on an iPhone. The smaller models are extremely fast, even with the "hacks".)
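For context, the usual "hack" is a sliding window: keep re-transcribing the most recent stretch of buffered audio as it arrives and update the hypothesis each pass. A toy sketch of that loop, with a stand-in transcriber in place of a real Whisper call:

```python
# Toy sliding-window "streaming" on top of a batch transcriber, the usual
# workaround for models like Whisper that only accept fixed chunks.
# transcribe() is a stub standing in for a real model call.

def transcribe(samples):
    # Stand-in: pretend one "word" per second of audio at 1 sample/sec.
    return " ".join(f"w{i}" for i in range(len(samples)))

def stream_transcribe(audio, window=5, step=2):
    """Re-transcribe a sliding window as audio arrives; yield hypotheses."""
    for end in range(step, len(audio) + 1, step):
        start = max(0, end - window)
        yield transcribe(audio[start:end])

audio = list(range(10))  # 10 "seconds" of incoming audio
hypotheses = list(stream_transcribe(audio))
print(hypotheses[-1])  # hypothesis for the final 5-second window
```

Real implementations add prefix-stabilization tricks on top (only committing text that stops changing between passes), but the core loop is this simple.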

Parakeet TDT's architecture is actually a really cool way to boost both the speed and efficiency of real time STT compared to traditional approaches.


Terrible? It's fine. What's your accent that it's terrible? It even pulls last names from my address book and spells them right.


Terrible relative to everything else that exists today. I have a neutral American accent.

Maybe you just don’t know what you’re missing? Google’s default speech to text is still bad compared to Whisper and Parakeet, but even Google’s is markedly better than Apple’s.

I cannot think of a single speech to text system that I’ve run into in the past 5 years that is less accurate than the one Apple ships.

Sure, Apple’s speech to text is incredible compared to what was on the flip phone I had 20 years ago. Terrible is relative. Much better options exist today, and they’re under very permissive licenses. Apple’s refusal to offer a better, more accessible experience to their users is frustrating when they wouldn’t even have to pay a licensing fee to ship something better. Whisper was released under a permissive license nearly 4 years ago.

Apple also restricts third party keyboards to an absurdly tiny amount of memory, so it isn’t even possible to ship a third party keyboard that provides more accurate on-device speech to text without janky workarounds (requiring the user to open the keyboard's own app first each time).


As someone who tried every TTS in existence a few years ago for some product work, Apple’s is so consistently better that we wound up getting a bunch of Apple stuff just for the TTS.


“A few years ago” sounds like it could be before the modern era of STT, as defined by when Whisper was released.

Your comment says TTS, which is different from what I’m discussing, though, so there might be some confusion.


> I have a neutral American accent

This is tangential but is _any_ accent objectively neutral?


Neutral here means not strongly identifiable as any particular regional American accent. Some people have very strong regional accents, some don’t. It is still clearly an American accent, not British or anything else.


TIL (assuming you mean American as in the United States of America) that this is known as a General American accent.

