If you're new to this: All of the open source models are playing benchmark optimization games. Every new open weight model comes with promises of being as good as something SOTA from a few months ago, and then they always disappoint in actual use.
I've been playing with Qwen3-Coder-Next and the Qwen3.5 models since they were each released.
They are impressive, but they are not performing at Sonnet 4.5 level in my experience.
I have observed that they're configured to be very tenacious. If you can carefully constrain the goal with some tests they need to pass and frame it in a way to keep them on track, they will just keep trying things over and over. They'll "solve" a lot of these problems in the way that a broken clock is right twice a day, but there's a lot of fumbling to get there.
That said, they are impressive for open source models. It's amazing what you can do with self-hosted now. Just don't believe the hype that these are Sonnet 4.5 level models because you're going to be very disappointed once you get into anything complex.
Respectfully, from my experience and a few billion tokens consumed, some open source models really are strong and useful. Specifically StepFun-3.5-flash https://github.com/stepfun-ai/Step-3.5-Flash
I'm working on a pretty complex Rust codebase right now, with hundreds of integration tests and nontrivial concurrency, and stepfun powers through.
I have no relation to stepfun, and I'm saying this purely from deep respect to the team that managed to pack this performance in 196B/11B active envelope.
What coding agent do you use with StepFun-3.5-flash? I just tried it from siliconflow's api with opencode. The tool calling is broken:
AI_InvalidResponseDataError: Expected 'function.name' to be a string.
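For what it's worth, most coding agents expect the OpenAI-style tool-call shape, where `function.name` is a string and `function.arguments` is a JSON-encoded string. A minimal validator sketch (the payloads here are made up) shows the kind of check that error comes from:

```python
def validate_tool_calls(message: dict) -> list[str]:
    """Return a list of problems found in an assistant message's tool calls."""
    problems = []
    for i, call in enumerate(message.get("tool_calls", [])):
        fn = call.get("function", {})
        if not isinstance(fn.get("name"), str):
            problems.append(f"tool_calls[{i}]: expected 'function.name' to be a string")
        if not isinstance(fn.get("arguments"), str):
            # arguments arrive as a JSON *string*, not a parsed object
            problems.append(f"tool_calls[{i}]: expected 'function.arguments' to be a string")
    return problems

# A well-formed call vs. the kind of malformed one some providers emit
good = {"tool_calls": [{"id": "call_1", "type": "function",
        "function": {"name": "read_file", "arguments": "{\"path\": \"main.rs\"}"}}]}
bad = {"tool_calls": [{"id": "call_2", "type": "function",
       "function": {"name": None, "arguments": "{}"}}]}
```

If that's what's happening, the bug may be on the provider's side (a chat template emitting a null name) rather than in opencode itself.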
I think we're at the point where the hard ceiling of a strong model is pretty hard to delineate reliably (at least in coding; in research work it's clearer, ofc) - and in a good sense: with suitable task decomposition, a test harness, or a good abstraction, you can make the model do what you thought it could not. StepFun is a strong model and I really enjoyed studying and comparing it to others by coding pretty complex projects semi-autonomously (will do a write up on this soon tm).
Even purely pragmatically, StepFun covers 95% of my research+SWE coding needs, and for the remaining 5% I can access the large frontier models. I was surprised StepFun is even decent at planning and research, so it is possible to get by with it and nothing else (1), but ofc for minmaxing the best frontier model is still the best planner (although the latest deepseek is surprisingly good too).
Finally we are at a point where there is a clear separation of labor between frontier & strong+fast models, but tbh shoehorning StepFun into this "strong+fast" category feels limiting, I think it has greater potential.
I pay for copilot to access anthropic, google and openai models.
Claude Code always gives me rate limits. Claude through Copilot is a bit slow, and Copilot has constant network request issues or something, but at least I don't get rate limited as often.
At least local models always work, are faster (50+ tps with qwen3.5 35b a4b on a 4090) and most importantly never hit a rate limit.
But qwen3.5 35b is worse than even Claude Haiku 4.5. You could switch your Claude Code to use Haiku and never hit rate limits. It also gets a similar 50 tps.
I haven't tried 4.5 haiku much, but i was not impressed with previous haiku versions.
My goto proprietary model in copilot for general tasks is gemini 3 flash which is priced the same as haiku.
The qwen model is in my experience close to gemini 3 flash, but gemini flash is still better.
Maybe it's somewhat related to what we're using them for. In my case I'm mostly using llms to code Lua. One case is a typed luajit language and the other is a 3d luajit framework written entirely in luajit.
I forget exactly how many tps I get with qwen, but glm 4.7 flash, which is really good (for a local model), gets me 120 tps and a 120k context.
Don't get me wrong, proprietary models are superior, but local models are getting really good AND useful for a lot of real work.
I also started playing with 3.5 Flash and was impressed.
It’s 2× faster than its competitors. For tasks where “one-shotting” is unrealistic, a fast iteration loop makes a measurable difference in productivity.
> some opensource models really are strong and useful
To be clear I never said they weren’t strong or useful. I use them for some small tasks too.
I said they’re not equivalent to SOTA models from 6 months ago, which is what is always claimed.
Then it turns into a Motte and Bailey game where that argument is replaced with the simpler argument that they're useful for open weights models. I'm not disagreeing with that part. I'm disagreeing with the first assertion that they're equivalent to Sonnet 4.5.
They are not equivalent 1:1, esp. in knowledge coverage (given OOM param size difference) and in taste (Sonnet wins, but for taste one can also use Kimi K2.5), but in my hardcore use (high-performance realtime simulations of various kinds) I would prefer StepFun-3.5-Flash to Sonnet 4 strongly and to 4.5 often enough without a decisive advantage in using exclusively Sonnet 4.5. For truly hard tasks or specifications I would turn to 5.2 or 5.3-codex of course - but one KPI for quality of my work as a lead engineer is to ensure that truly hard tasks are known, bounded and planned-for in advance.
Maybe my detailed, requirement-based/spec-based prompting style makes the difference between anthropic's and OSS models smaller and people just like how good Anthropic's models are at reading the programmer's intent from short concise prompts.
Frankly, I think the 1:1 equivalent is an impossible standard given the set of priorities and decisions frontier labs make when setting up their pre-, mid- and post-training pipelines, and benchmark-wise it is achievable for a smaller OSS model to align with Sonnet 4.5 even on hard benchmarks.
Given the relatively underwhelming Sonnet 4.5 benchmarks [1], I think StepFun might have an edge over it esp. in Math/STEM [2] - even an old deepseek-3.2 (not speciale!) had a similar aggregate score. With 4.6 Anthropic ofc vastly improved their benchmark game, and it now truly looks like a frontier model.
Yes and no. "Last-gen" (like, from 6 months ago) frontier models do still tend to outperform the best open source models. But some models, especially GLM-5, really have captured whatever circuitry drives pattern matching in the models they were trained off of.
I like this benchmark that competes models against one another in competitive environments, which seems like it can't really be gamed: https://gertlabs.com
> Yes and no. "Last-gen" (like, from 6 months ago) frontier models do still tend to outperform the best open source models
That’s exactly what I said, though. The headline we’re commenting under claims they’re Sonnet 4.5 level but they’re not.
I don’t disagree that they’re powerful for open models. I’m pointing out that anyone reading these headlines who expects a cheap or local Sonnet 4.5 is going to discover that it’s not true.
I'm using Qwen 3.5 27b on my 4090 and let me tell you. This is the first time I am seriously blown away by coding performance on a local model. They are almost always unusable. Not this time though...
Are there any up-to-date offline/private agentic coding benchmark leaderboards?
If the tests haven't been published anywhere and are sufficiently different from standard problems, I would think the benchmarks would be robust to intentional over optimization.
Edit:
These look decent and generally match my expectations:
This is because of the forbidden assumption in statistics: independence. Any statistic, even something as basic as an average, ONLY works if you can guarantee the independence of the individual facts it measures.
But there's a problem with that: the existence of the statistical measure itself is very much a link between all those individual facts. In other words, if there is ANY causal link between the statistical measure and the events it measures, it has become bullshit (because the law of large numbers no longer applies).
So let's put it in practice: say there's a running contest, and you display the minimum, maximum and average time of all runners that have had their turns. We all know what happens: the average trends up. And yet that's exactly what statistics says shouldn't happen; the average should go up and down with roughly 50% odds when a new runner is added. It doesn't, because showing the average causes behavior changes in the next runner.
This means, of course, that basing a decision on something as trivial as last year's average running time is only mathematically defensible ONCE. The second time, the average is wrong, and you're basing your decision on wrong information.
But of course, not only will most people actually deny this is the case, this is also how 99.9% of human policy making works. And it's mathematically wrong! Simple, fast ... and wrong.
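A toy simulation of the point (my own construction, with made-up numbers): independent runners keep the average flat, while runners who pace themselves against the displayed average pull it steadily upward:

```python
import random

def final_average(n_runners: int, feedback: bool, seed: int = 1) -> float:
    """Average finishing time after n_runners. Independent runners draw
    i.i.d. times; feedback runners see the displayed average and pace
    against it (no reason to run much faster), adding positive slack."""
    rng = random.Random(seed)
    times = [rng.gauss(100, 10)]
    for _ in range(n_runners - 1):
        avg = sum(times) / len(times)
        if feedback:
            times.append(avg + abs(rng.gauss(0, 5)))  # behavior depends on the statistic
        else:
            times.append(rng.gauss(100, 10))          # independent of the display
    return sum(times) / len(times)
```

With feedback, every new time sits at or above the running average, so the average can only ratchet upward; in the independent case the law of large numbers holds and it stays put.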
Hmm, I second this. Haven't compared Qwen3.5 122B yet, but played around with OpenCode + Qwen3-Coder-Next yesterday and did manual comparisons with Claude Code and Claude Code is still far ahead in general felt "intelligence quality".
I’ve switched to using Kimi 2.5 for all of my personal usage and am far from disappointed.
Aside from being much cheaper than the big names (yes, I’m not running it locally, but like that I could) it just works and isn’t a sycophant. Nice to get coding problems solved without any “That’s a fantastic idea!”/“great point” comments.
At least with Kimi my understanding is that beating benchmarks was a secondary goal to good developer experience.
Just going to echo this. Been using K2.5 in opencode as a switch away from Opus because it was too expensive for the sorts of things I was playing with, and it's been great. There's definitely a bit of learning to get the hang of what sort of prompts to give it and to make sure there's enough documentation in the project for it, but it's remarkably capable once you're in the swing of it.
I've been trying to get these things to run locally and use tools. Am I right in understanding that it's impossible for them to use tools from within llama.cpp? Do I need another "thing" to run the models? What exactly is the mechanism by which the models become aware that they're somewhere where they have tools available? So many questions...
No, what he is saying is that benchmarks are static and there is tremendous reputational and financial pressure to make benchmark number go up. So you add specific problems to training data... The result is that the model is smarter, but the benchmarks overstate the progress. Sure there are problem sets designed to be secret, but keeping secrets is hard given the fraction of planetary resources we are dedicating to making the AI numbers go up.
I have two of my own comments to add to that. First, there is a problem-alignment issue at play: the benchmarks are mostly self-contained problems with well-defined solutions and specific prompt language, while human tasks are open ended, with messy prompts and much steerage. Second, it would be interesting to test older models on brand new benchmarks to see how they compare.
> No, what he is saying is that benchmarks are static and there is tremendous reputational and financial pressure to make benchmark number go up.
That's a much better way to say it than I did.
These models are known for being open weights, but they're still products that Alibaba Cloud is trying to sell. They have Product Managers and PR and marketing people under pressure to get people using them.
This Venture Beat article is basically a PR piece for the models and Alibaba Cloud hosting. The pricing table is right in the article.
It's cool that they release the models for us to use, but don't think they're operating entirely altruistically. They're playing a business game just like everyone else.
The models overperform on the benchmarks relative to general tasks.
The benchmarks are public. They're guaranteed to be in the training sets by now. So the benchmarks are no longer an indicator of general performance because the specific tasks have been seen before.
> And could quantization maybe explain the worse than expected results?
You can use the models through various providers on OpenRouter cheaply without quantization.
Flawed? Possibly, but I think it's more that any kind of benchmark becomes a target, and is inherently going to be a "lossy" signal of the model's actual ability in practice.
Quantisation doesn't help, but even running full-fat versions of these models through various cloud providers, they still don't match Sonnet in actual agentic coding use, at least in my experience.
Death by KPIs. Management makes it too risky to do anything but benchmaxx. It will be the death of American AI companies too. Eventually, people will notice models aren’t actually getting better and the money will stop flowing. However, this might be a golden age of research as cheap GPUs flood the market and universities have their own clusters.
I periodically try to run these models on my MBP M3 Max 128G (which I bought with a mind to run local AI). I have a certain deep research question (in a field that is deeply familiar to me) that I ask when I want to gauge model's knowledge.
So far Opus 4.6 and Gemini Pro are very satisfactory, producing great answers fairly fast. Gemini is very fast at 30-50 sec, Opus is very detailed and comes at about 2-3 minutes.
Today I ran the question against local qwen3.5:35b-a3b - it puffed for 45 (!) minutes, produced a very generic answer with errors, and made my laptop sound like it's going to take off any moment.
Wonder what I'm doing wrong?.. How am I supposed to use this for any agentic coding on a large enough codebase? It would take days (and a 3M Peltor X5A) to produce anything useful.
You're comparing ~100B-parameter open models running on a consumer laptop vs. private models with at the very least 1T parameters running on racks of bleeding-edge professional GPUs.
Local agentic coding is closer to "shit me the boilerplate for an Android app", not "deep research questions", especially on your machine.
> Speculation is that the frontier models are all below 200B parameters
Some versions of some of the models are around that size, which you might hit for example with the ChatGPT auto-router.
But the frontier models are all over 1T parameters. Source: watch interviews with people who have left one of the big three labs, now work at the Chinese labs, and are talking about how to train 1T+ models.
Certainly not Opus. That beast feels very heavy - the coherence of longer-form prose is usually a good marker, and it can spit out coherent 4000-word short stories in a single shot.
He's running a 35B parameter model. Frontier models are well over a trillion parameters at this point. Parameters = smarts. There are 1T+ open source models (e.g. GLM5), and they're actually getting to the point of being comparable with the closed source models; but you cannot remotely run them on any hardware available to us.
Core speed/count and memory bandwidth determines your performance. Memory size determines your model size which determines your smarts. Broadly speaking.
The architecture is also important: there's a trade-off for MoE. There used to be a rough rule of thumb that a 35B-A3B MoE model would be equivalent in smarts to an ~11B dense model, give or take, but that hasn't been accurate for a while.
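That old rule of thumb is usually stated as the geometric mean of total and active parameters; a quick sketch of the heuristic (a community folk formula, not anything official):

```python
from math import sqrt

def moe_dense_equivalent(total_params_b: float, active_params_b: float) -> float:
    """Folk heuristic: a MoE model 'feels' like a dense model with roughly
    sqrt(total * active) parameters. Increasingly unreliable for new models."""
    return sqrt(total_params_b * active_params_b)

# 35B total / 3B active -> roughly an 11B-class dense model
estimate = moe_dense_equivalent(35, 3)
```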
Having tried the Mistral Vibe harness that was supposedly designed for Devstral, that thing is abysmal. I feel sorry for whatever they did to that model, it didn't deserve it.
The thing I most noticed was asking it for help with configuring local MCP servers in Mistral Vibe - something it supports, it literally shows how many MCP servers are connected on the startup screen - it then begins scanning my local machine for servers running "MineCraft Protocol".
I want Mistral to do well, and I use their Voxtral Transcribe 2, that one has been useful. I'd even like a well made Mistral Vibe (c'mon, "oui oui baguette" is a hilarious replacement for "thinking"). But Mistral are so far behind, and they don't seem to even know or accept that they are.
Well, Opus and Gemini are probably running on multiple H200 equivalents, maybe hundreds of thousands of dollars of inference equipment. Local models are inherently inferior; even the best Mac that money can buy will never hold a candle to latest-generation Nvidia inference hardware, and the local models, even the largest, are still not quite at the frontier. The ones you can plausibly run on a laptop (where "plausible" really means "45 minutes and making my laptop sound like it is going to take off at any moment") are further behind still. Like they said, you're getting Sonnet 4.5 performance, which is two generations ago; speaking from experience, Opus 4.6 is night and day compared to Sonnet 4.5.
> Well Opus and Gemini are probably running on multiple H200 equivalents, maybe multiple hundreds of thousands of dollars of inference equipment.
But if you've got that kind of equipment, you aren't using it to support a single user. It gets the best utilization by running very large batches with massive parallelism across GPUs, so that's what you're going to do. Still, there is such a thing as a useful middle ground: one that may not give you the absolute best performance but will be found broadly acceptable and still be quite viable for a home lab.
Batching helps with efficiency, but you can't fit Opus into anything less than hundreds of thousands of dollars in equipment.
Local models are more than a useful middle ground; they are essential and will never go away. I was just addressing the OP's question about why he observed the difference he did. One is an API call to the world's most advanced compute infrastructure and the other is running on a $500 CPU.
Lots of uses for small, medium, and larger models they all have important places!!
The biggest gaps are not in hardware or model size. There are a lot of logical fallacies in the industry. Most people believe bigger is better: for model size, compute, tools, etc.
The reality in ML is that small models can perform better at a narrow problem set than large ones.
The key is the narrow problem set. Opus can write you a poem, create a shopping list, and analyze your massive code base.
We trained our model to only focus on coding with our specific agent harness, tools, and context engine. And it’s small enough to fit on an M2 16GB. It’s as good as sonnet 4.5 and way better than qwen3.5:35b-a3b
Well, first of all you're running a long, intense task on a thermally constrained machine. Your MacBook Pro is optimised for portability and battery life, not max performance under load. And Apple's obsession with thinness overrules thermal performance for them. Short peaks will be OK, but a 45-minute task will thoroughly saturate the cooling system.
Even on servers this can happen. At work we have a 2U server with two 250W-class GPUs, and I found that by pinning the case fans at 100% I can get 30% more performance out of GPU tasks, which translates to several days faster for our use case. It does mean I can literally hear the fans screaming in the hallway outside the equipment room, but ok lol. Who cares. A laptop just can't compare.
Something with a desktop GPU or even better something with HBM3 would run much better. Local models get slow when you use a ton of context and the memory bandwidth of a MacBook Pro while better than a pc is still not amazing.
And yeah the heaviest tasks are not great on local models. I tend to run the low hanging fruit locally and the stuff where I really need the best in the cloud. I don't agree local models are on par, however I don't think they really need to be for a lot of tasks.
To your point, one can get a great performance boost by propping the laptop onto a roost-like stand in front of a large fan. Nothing like a cooling system actually built for sustained load but still.
I've seen reports of qwen3.5-35b-a3b spending a ton of time reasoning if the context window is nearly empty-- supposedly it reasons less if you provide a long system prompt or some file contents, like if you use it in a coding agent.
I'm too GPU-poor to run it, but r/LocalLLaMa is full of people using it.
Can confirm. I gave it a variant of the car wash question on a MacBook M4 with 32 GB of RAM. It produced output at a conversational speed, sure, but that started with 6 minutes of thinking output. 6 minutes.
On the plus side, it did figure out the question even without the first sentence that's intended as a bit of a giveaway.
There's definitely something wrong with the thinking mode on this one. I wouldn't be surprised if it gets fixed, either by qwen themselves or with a fine-tune.
Running local AI models on a laptop is a weird choice. The Mini and especially the Studio form factor will have better cooling, lower prices for comparable specs and a much higher ceiling in performance and memory capacity.
I can never see the point, though. Performance isn't anywhere near Opus, and even that gets confused following instructions or making tool calls in demanding scenarios. Open weights models are just light years behind.
I really, really want open weights models to be great, but I've been disappointed with them. I don't even run them locally, I try them from providers, but they're never as good as even the current Sonnet.
I can't speak to using local models as agentic coding assistants, but I have a headless 128GB RAM machine serving llama.cpp with a number of local models that I use on a daily basis.
- Qwen3-VL picks up new images in a NAS, auto captions and adds the text descriptions as a hidden EXIF layer into the image, which is used for fast search and organization in conjunction with a Qdrant vector database.
- Gemma3:27b is used for personal translation work (mostly English and Chinese).
- Llama3.1 spins up for sentiment analysis on text.
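For the curious, a captioning step like the first one can be sketched against llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint. The server URL, model name, and prompt here are assumptions, and the EXIF/Qdrant plumbing is left out:

```python
import base64
import json
from urllib import request

LLAMA_URL = "http://localhost:8080/v1/chat/completions"  # assumed llama.cpp server address

def build_caption_request(image_bytes: bytes, model: str = "qwen3-vl") -> dict:
    """Build an OpenAI-style vision chat request for a local llama.cpp server."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in one searchable sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

def caption(image_bytes: bytes) -> str:
    """Send the request and return the generated caption text."""
    body = json.dumps(build_caption_request(image_bytes)).encode()
    req = request.Request(LLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The returned caption can then be embedded, upserted into Qdrant, and written back into the image's EXIF as described.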
Ah yeah, self-contained tasks like these are ideal, true. I'm more using it for coding, or for running a personal assistant, or for doing research, where open weights models aren't as strong yet.
Understood. Research would make me especially leery; I’d be afraid of losing any potential gains as I'd feel compelled to always go and validate its claims (though I suppose you could mitigate it a little bit with search engine tooling like Kagi's MCP system).
Yeah, for sure, I just don't have many of those. For example, the only use I have for Haiku is for summarizing webpages, or Sonnet for coding something after Opus produces a very detailed plan.
Maybe I should try local models for home automation, Qwen must be great at that.
They're like 6 months away on most benchmarks, and people already claimed coding was solved 6 months ago, so which is it? The current version is the baseline that solves everything, but as soon as the new version is out it becomes utter trash and barely usable.
That's very large models at full quantization though. Stuff that will crawl even on a decent homelab, despite being largely MoE based and even quantization-aware, hence reducing the amount and size of active parameters.
That's just a straw man. Each frontier model version is better than the previous one, and I use it for harder and harder things, so I have very little use for a version that's six months behind. Maybe for simple scripts they're great, but for a personal assistant bot, even Opus 4.6 isn't as good as I'd like.
So it's back to the original question, why spend $5-10k on the Studio, when it will still be 10x slower and half the intelligence vs. $20 Sonnet?.. What is the point (besides privacy) to use local models now for coding?
PS: I can understand that isolated "valuable" problems like sorting photo collection or feeding a cat via ESPHome can be solved with local models.
At least for me, it's cheap. Even Claude Haiku 4.5 would cost over $60 each day for the same token amount, after accounting for electricity costs. I have the hardware for other reasons anyway, so why not use it, avoid privacy issues and save money.
Are the LLMs very useful? That is a whole other discussion...
You can't use a $20 Sonnet subscription for general agentic use cases, you have to pay for API use on a per-token basis. The $20 and $200 subscriptions are widely considered unsustainable as such. If anything, the real competition is third-party cheap inference providers.
I think knowledge of frontier research certainly scales with the number of parameters. Also, US labs can pay more money to have researchers provide training data in these frontier research areas.
On the other hand, if open source models and MacBooks really could be as powerful as those SOTA models from Google, etc., then the stock prices of many companies would already have collapsed.
Depending on the specificity of the research, having a model with fewer parameters will come with a higher penalty. If you want a model to perform better at something specific while staying smaller, generally it will take specific training to achieve that.
Your Gemini or Opus question got sent to a Texas datacenter, where it got queued and processed by a subunit of 80 H200 140GB 1000W cards running a many-billion or trillion parameter model. It took less than 200 ms to process a single request. Your Claude client decided to spawn 30 sub-agents and iterated over a total of 90 requests, totalling about 45000 ms. Now compare that to your 100B-transistor CPU doing something similar. Yes, that would be slow.
Right, it was more of a rhetorical question :) With my point being - how are these local models really useful to me now? Is the Only Way ™ to sell my house and build an 8x5090 monster?.. How does that compare to $20/month Opus? (Privacy aside.)
The second order thought from this is... will we get a value-based price leveling soon? If the alternative to a hosted LLM is to build $10-20k+ machine with $500+ monthly energy bills, will hosted price asymptotically climb up to reflect this reality?
Looked at from the other end of the telescope, the other factor is how fast low-end local models can gain capability. This 35b model is absolutely fine on a 4090 in a machine that was about £3000 when I bought it three years ago. Where will what you can run on a 4090, or a 5090, be in six months? That's the interesting question, but we're already well past the point where the uses to which you will be able to put a local model dramatically increase within the depreciation lifespan of the hardware.
We would need a super-high-end AI accelerator with specialised cooling for less than 3k bucks to make it happen. Consumer gaming graphics cards won't fit the bill. Problem is, all TSMC capacity is already booked for years to come by the big players to build datacenter-grade hardware with price tags and setup requirements out of consumer reach.
use a larger model like Qwen3.5-122B-A10B quantized to 4/5/6 bits depending on how much context you desire, MLX versions if you want best tok/s on Mac HW.
if you are able to run something like mlx-community/MiniMax-M2.5-3bit (~100gb), my guess is the results are much better than 35b-a3b.
I have the exact same hardware. Was going to do the same thing with the 122B model … I'll just keep paying Anthropic; the models are just that good. Trying out Gemini too. But won't pay OpenAI as they're going to be helping Pete Hegseth to develop autonomous killing machines.
On my 32GB Ryzen desktop (recently upgraded from 16GB before the RAM prices went up another +40%), did the same setup of llama.cpp (with Vulkan extra steps) and also converged on Qwen3-Coder-30B-A3B-Instruct (also Q4_K_M quantization)
On the model choice: I've tried latest gemma, ministral, and a bunch of others. But qwen was definitely the most impressive (and much faster inference thanks to MoE architecture), so can't wait to try Qwen3.5-35B-A3B if it fits.
I've no clue about which quantization to pick though ... I picked Q4_K_M at random, was your choice of quantization more educated?
Quant choice depends on your vram, use case, need for speed, etc. For coding I would not go below Q4_K_M (though for Q4, unsloth XL or ik_llama IQ quants are usually better at the same size). Preferably Q5 or even Q6.
I am a total neophyte when it comes to LLMs, and only recently started poking around into the internals of them. The first thing that struck me was that float32 dimensions seemed very generous.
I then discovered what quantization is by reading a blog post about binary quantization. That seemed too good to be true. I asked Claude to design an analysis assessing the fidelity of 1, 2, 4, and 8 bit quantization. Claude did a good job, downloading 10,000 embeddings from a public source and computing a similarity score and correlation coefficient for each level of quantization against the float32 SoT. 1 and 2 bit quantizations were about 90% similar and 8 bit quantization was lossless given the precision Claude used to display the results. 4 bit was interesting as it was 99% similar (almost lossless) yet half the size of 8 bit. It seemed like the sweet spot.
This analysis took me all of an hour so I thought, "That's cool but is it real?" It's gratifying to see that 4 bit quantization is actually being used by professionals in this field.
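The experiment is easy to reproduce on synthetic embeddings. A sketch with plain symmetric uniform quantization (not necessarily the method Claude used; 1-bit falls back to sign quantization):

```python
import math
import random

def quantize(vec, bits):
    """Quantize to symmetric uniform levels, then dequantize.
    1-bit degenerates to sign quantization."""
    if bits == 1:
        return [1.0 if x >= 0 else -1.0 for x in vec]
    levels = 2 ** (bits - 1) - 1          # 7 levels for 4-bit, 127 for 8-bit
    scale = max(abs(x) for x in vec) / levels
    return [round(x / scale) * scale for x in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

rng = random.Random(42)
emb = [rng.gauss(0, 1) for _ in range(1024)]  # stand-in for a real embedding
sims = {b: cosine(emb, quantize(emb, b)) for b in (1, 4, 8)}
```

On this synthetic data the ordering matches the comment above: 8-bit is near-lossless, 4-bit is close behind, and 1-bit loses noticeably more.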
4-bit quantization on newer Nvidia hardware is being supported in training as well these days. I believe the gpt-oss models were trained natively in MXFP4, which is a 4-bit floating-point / e2m1 format (2-bit exponent, 1-bit mantissa, 1 sign bit).
It doesn't seem terribly common yet though. I think it is challenging to keep it stable.
mxfp4 is a block-based floating-point format. The E2M1 format applies to individual values, but each 32-value block also has a shared 8-bit exponent that provides scaling information for the whole block.
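A toy encoder for that idea (simplified: real MXFP4 constrains the shared scale to an 8-bit power-of-two exponent and has specific rounding rules):

```python
import math

E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # magnitudes representable in E2M1

def mxfp4_block(values):
    """Quantize one block (up to 32 values): pick a shared power-of-two scale
    so the largest magnitude fits under E2M1's max (6.0), then round each
    value to the nearest signed E2M1 number. Returns the dequantized block."""
    amax = max(abs(v) for v in values)
    if amax == 0:
        return [0.0] * len(values)
    scale = 2.0 ** math.ceil(math.log2(amax / 6.0))  # shared block scale
    out = []
    for v in values:
        mag = min(E2M1, key=lambda m: abs(abs(v) / scale - m))
        out.append(math.copysign(mag * scale, v))
    return out
```

Values already on the scaled E2M1 grid round-trip exactly; everything else snaps to the nearest representable point. Each value costs 4 bits, plus 8 shared scale bits per 32-value block, so ~4.25 bits per value.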
There's also work on ternary models that's quite interesting, because the arithmetic operations are super fast and they're extremely cache efficient. Well worth looking into if that's the sort of thing that interests you.
I do wonder where that extra acuity you get from 1% more shows up in practice.
I hate how I have basically no way to intuitively tell that because of how much of a black box the system is
Well, why would Claude know any of this? Obviously it's the wrong criterion. If you have your own dataset to benchmark, create your own calibration for quantization with it. Scientifically, you wouldn't really believe in the whole process of gradient descent if you didn't think tiny differences in these values matter. So...
I think you might be replying to a different person or misunderstanding what I said, but you're right that just as I don't have an intuition for where the acuity shows up in the corpus, I don't think Claude does either.
I decided to try Qwen3.5 122B in LM Studio with Opencode and I am impressed. It's not super slow (M4 Max/128GB) and it's pretty close to how Claude Code feels. Getting pretty good code analysis, definitely feels Sonnet-esque. I'm hyped completely local alternatives are getting so good.
Getting better, but definitely not there yet, nor near Sonnet 4.5 performance.
What these open models are great for are for narrow, constrained domains, with good input/output examples. I typically use them for things like prompt expansion, sentiment analysis, reformatting or re-arranging flow of code.
What I found they have trouble with is going from ambiguous description -> solved problem. Qwen 3.5 is certainly the best of the OSS models I've found (beating out GPT 120b OSS which was the previous king), and it's just starting to demonstrate true intelligence in unbound situations, but it isn't quite there yet. I have a RTX 6000 pro, so Qwen 3.5 is free for me to run, but I tend to default to Composer 1.5 if I want to be cheap.
The trend however is super encouraging. I bought my vid card with the full expectation that we'll have a locally running GPT 5.2 equiv by EoY, and I think we're on track.
Smells like hyperbole. A lot of people making such claims don’t seem to have continued real world experience with these models or seem to have very weird standards for what they consider usable.
Up until relatively recently, while people had already long been making these claims, they came with the asterisk of "oh, but you can't practically use more than a few K tokens of context".
"Create a single page web app scientific RPN calculator"
Qwen 3.5 122b/a10b (at q3 using unsloth's dynamic quant) is so far the first model I've tried locally that produces a really usable RPN calculator app. Other models (even larger ones that I can run on my Strix Halo box) tend to either not implement the stack right, have non-functional operation buttons, or, most commonly, produce a keypad that looks like a Picasso painting (i.e., the 10-key pad portion has buttons missing or mapped all over the keypad area).
This seems like such a simple test, but I even just tried it in ChatGPT (whatever model they serve up when you don't log in), and its version didn't even have any numerical input buttons. Claude Sonnet 4.6 did get it correct too, but that is the only other model I've used that gets this question right.
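For what it's worth, the stack semantics the models keep fumbling fit in a few lines. Here's a hypothetical minimal evaluator (numbers and four binary operators only, no error handling) that a generated calculator ought to match:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_rpn(tokens: list[str]) -> float:
    """Evaluate RPN: operands push; an operator pops two operands and pushes the result."""
    stack: list[float] = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()  # second pop is the LEFT operand
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[-1]
```

`eval_rpn(["3", "4", "+", "2", "*"])` gives 14.0; getting the pop order backwards is exactly the kind of stack bug that shows up with `-` and `/`.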
We tend to find Qwen3-Coder-Next better at coding, at least on anecdotal examples from our codebases. It's somewhat better at tool calling; maybe the current templates for Qwen3.5 don't yet enjoy support as "mature" as Qwen3's on vllm. I can say MiniMax2.5 is currently the favorite on my team.
if so, a better approach would be to ask it to first plan that entire task and give it some specific guidance
then once it has the plan, ask it to execute it, preferably by letting it call other subagents that take care of different phases of the implementation while the main loop just merges those worktrees back
Qwen3-Coder-30B-A3B-Instruct is good, I think, for inline IDE integration or operating on small functions or library code, but I don't think you will get too far with the one-shot feature implementation that people are currently doing with Claude or whatever.
Presumably not, but the better approach anyway is to first plan using a powerful/expensive model like Opus; then you can use something less capable and cheaper for the coding part. This would be true even if you just want to use Anthropic models, but it makes even more sense if you want to use something cheaper like Qwen3.5 or Kimi K2.5 for the coding part.
I was adding a one-shot feature to a codebase with ChatGPT 5.3 Codex in Cursor and it worked out of the box, but then I realised everything it had done was super weird and it didn't work under a load of edge cases. I've tried being super clear about how to fix it, but the model is lost. This was not a complex feature at all, so hopefully I'm employed for a few more years yet.
I could be doing something wrong, but I have not had any success with one-shot feature implementations for any of the current models. There are always weird quirks, undesired behaviors, bad practices, or just egregiously broken implementations. A week or so ago, I had instructed Claude to do something at compile time and it instead burned a phenomenal amount of tokens before yeeting the most absurd and convoluted runtime implementation, which didn't even work. At work I use it (or Codex) for specific tasks, delegating specific steps of the feature implementation.
The more I use the cloud based frontier models, the more virtue I find in using local, open source/weights models, because they tend to create much simpler code. They require more direct interaction from me, but the end result tends to be less buggy, easier to refactor/clean up, and more precisely what I wanted. I am personally excited to try this new model out here shortly on my 5090. If I read the article correctly, it sounds like even the quantized versions have a "million"[1] token context window.
And to note, I’m sure I could use the same interaction loop for Claude or GPT, but the local models are free (minus the power) to run.
[1] I’m a dubious it won’t shite itself at even 50% of that. But even 250k would be amazing for a local model when I “only” have 32GB of VRAM.
I used the 35b model to create a polars implementation of PCA (no sklearn or imports other than math and polars). In less than 10 minutes I had the code. This is impressive to me considering how poorly all models did with polars until very recently. (They always hallucinated pandas code.)
The SWE chart is missing Claude on the front page; interesting way to present your data. Mix and match at will.
Grown-up people showing public-school-level sneakiness. That fact alone disqualifies your LLM. Business/marketing leaders are usually brighter than average developers... so there.
Thinking about getting a new MBP M5 Max 128GB (assuming they are released next week). I know "future proofing" at this stage is near impossible, but for writing Rust code locally (likely using Qwen 3.5 for now on MLX), the AIs have convinced me this is probably my best choice for immediate use with some level of longevity, while retaining portability (not strictly needed, but nice to have). Alternatively I was considering RTX options or a Mac Studio, but was leaning towards Apple for the unified memory. What does HN think?
Thermals. Your workloads will be throttled hard once it inevitably runs hot. See comments elsewhere in thread about why LLMs on laptops like MBP is underwhelming. The same chips in even a studio form factor would perform much better.
Strix Halo machines are a good option too if you are at all price sensitive. AMD (with all the downsides of that for AI work) but people are getting decent performance from them.
I have a Mac Studio with 128GB and a M4 Max and I'd recommend it. The power usage is also pretty good, but you may not care if you live somewhere where energy is cheap.
Have you used this for Rust coding by chance? I'm curious how it compares to Opus 4.6. I realize it isn't going to think to the same level, but curious how code quality is for a more straight forward task.
I've been mulling the same, but decided against (for now)
Using Claude Code Max 20 so ROI would be maybe 2+ years.
CC gives me unlimited coding in 4-6 windows in parallel. Unsure if any model would beat (or even match) that, both in terms of quality and speed.
I wouldn't gamble on that now. With a subscription, I can change any time. With the machine, you risk that this great insane model comes out but you need 138GB and then you'll pay for both.
A big thing a lot of local users forget is that inference is hard. Maybe you have the wrong temperature. Maybe you have the wrong min-p. Maybe you have the wrong template. Maybe the implementation in llama.cpp has a bug. Maybe Q4 or even Q8 just won't compare to BF16. The reality is, there are so many knobs in LLM inferencing, and any one of them can make the experience worse. It's not always the model's fault.
Radeon R9700 with 32 GB VRAM is relatively affordable for the amount of RAM and with llama.cpp it runs fast enough for most things. These are workstation cards with blower fans and they are LOUD. Otherwise if you have the money to burn get a 5090 for speeeed and relatively low noise, especially if you limit power usage.
I have a pair of Radeon AI PRO R9700 with 32GB, and so far they have been a pleasure to use. Drivers work out-of-the-box, and they are completely quiet when unused. They are capped at 300W power, so even at 100% utilization they are not too loud.
I was thinking about adding after-market liquid cooling for them, but they're fine without it.
It depends. How much are you willing to wait for an answer? Also, how far are you willing to push quantization, given the risk of degraded answers at more extreme quantization levels?
It's less than you'd think. I'm using the 35B-A3B model on an A5000, which is something like a slightly faster 3080 with 24GB VRAM. I'm able to fit the entire Q4 model in memory with 128K context (and I think I would probably be able to do 256K since I still have like 4GB of VRAM free). The prompt processing is something like 1K tokens/second and generates around 100 tokens/second. Plenty fast for agentic use via Opencode.
For anyone else trying to run this on a Mac with 32GB unified RAM, this is what worked for me:
First, make sure enough memory is allocated to the gpu:
sudo sysctl -w iogpu.wired_limit_mb=24000
Then run llama.cpp but reduce RAM needs by limiting the context window and turning off vision support. (And turn off reasoning for now as it's not needed for simple queries.)
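Concretely, the invocation looks something like the following. The model filename is hypothetical, and flag availability depends on your llama.cpp version (`--no-mmproj` and `--reasoning-budget` exist in recent builds, but double-check `llama-server --help`):

```shell
# Allow the GPU to wire up to ~24GB of the 32GB unified memory (resets on reboot)
sudo sysctl -w iogpu.wired_limit_mb=24000

# Serve with a reduced context window (-c), vision support skipped (--no-mmproj),
# and reasoning disabled (--reasoning-budget 0) to stay inside the memory limit.
./llama-server -m Qwen3.5-35B-A3B-Q4_K_M.gguf -c 16384 --no-mmproj --reasoning-budget 0
```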
I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?
> I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?
Sure. Llama.cpp will happily run these kinds of LLMs using either HIP or Vulkan.
Vulkan is easier to get going using the Mesa OSS drivers under Linux, HIP might give you slightly better performance.
I think the 27B dense model at full precision and 122B MoE at 4- or 6-bit quantization are legitimate killer apps for the 96 GB RTX 6000 Pro Blackwell, if the budget supports it.
I imagine any 24 GB card can run the lower quants at a reasonable rate, though, and those are still very good models.
Big fan of Qwen 3.5. It actually delivers on some of the hype that the previous wave of open models never lived up to.
No experience with 5 and not much with 4.7, but they both have quite a few advocates over on /r/localllama.
Unsloth's GLM-4.7-Flash-BF16.gguf is quite fast on the 6000, at around 100 t/s, but definitely not as smart as the Qwen 3.5 MoE or dense models of similar size. As far as I'm concerned Qwen 3.5 renders most other open models short of perhaps Kimi 2.5 obsolete for general queries, although other models are still said to be better for local agentic use. That, I haven't tried.
I've got the unsloth q4_K_XL 35b running in llama.cpp on an i9/64G/4090 machine doing double-digit tokens per second with a 90k+ token context window available. The model's completely in VRAM.
It is slow but usable via opencode on an MBP M3 Max 48GB. So I guess hosted is still the better option for most people.
The local models are considerably better relative to the hosted ones compared to 6 months ago. Bench maxing or not - stuff is happening in this area for sure.
The larger 3.5 quants are actually pretty close to the full-blown 397B model's performance, at least looking at the numbers. Qwen 3.5 seems more tolerant of quantization than most.
122B-A10B-UD-Q4-K-XL generated https://pastebin.com/j3ddfNtS -- but I can't get it to do anything in a couple of online interpreters. Guessing it wasn't trained on a lot of Brainfuck code.
Edit: it looks like the flagship models work by writing a C or Python program to do the bookkeeping. I don't have Qwen set up to use tools, and even Opus 4.6 shits the bed when told to do it without tools [1], so not too surprising that it didn't work.
1: https://claude.ai/share/1f5289ae-decd-4dfa-98fd-0d34346008c6 -- I interrupted it and told it not to use a C/Python program or any other tools to generate the Brainfuck code, and it gave me an error message after about 10 minutes that wasn't logged to the chat.
18GB was an odd 3-channel one-off for the M3 Pros. I guess there's a bunch of them out there, but how slow would 27B be on it, given it's not an MoE model?
That's like saying "somewhere between Eliza and Haiku 4.5". Haiku is not even a so-called 'reasoning model'.¹
¹ To preempt the easily-offended, this is what the latest Opus 4.6 in today's Claude Code update says: "Claude Haiku 4.5 is not a reasoning model — it's optimized for speed and cost efficiency. It's the fastest model in the Claude family, good for quick, straightforward tasks, but it doesn't have extended thinking/reasoning capabilities."
> Claude Haiku 4.5, a new hybrid reasoning large language model from Anthropic in our small, fast model class.
> As with each model released by Anthropic beginning with Claude Sonnet 3.7, Claude Haiku 4.5 is a hybrid reasoning model. This means that by default the model will answer a query rapidly, but users have the option to toggle on “extended thinking mode”, where the model will spend more time considering its response before it answers. Note that our previous model in the Haiku small-model class, Claude Haiku 3.5, did not have an extended thinking mode.
Not sure what this means, but as a marketing person myself, here's what happened: One day, an Anthropican involved in the Haiku 4.5 launch shrugged, weighed the odds of getting spanked for equating "extended thinking" with "reasoning", and then used Claude to generate copy declaring that. It's not rocket surgery!
It's mainly that people on here, regardless of profession, speak incorrectly but confidently about things that could be easily verified with a Google search or basic familiarity with the thing in question.
Haiku 4.5 is a reasoning model, regardless of whatever hallucination you read. Being a hybrid reasoning model means that, depending on the complexity of the question and whether you explicitly enable reasoning (this is "extended thinking" in the API and other interfaces) when making a request to the LLM, it will emit reasoning tokens separately prior to the tokens used in the main response.
I love your theory that there was some mix up on their side because they were lazy and it was just some marketing dude being quirky with the technical language.
> It's mainly that people on here, regardless of profession, speak incorrectly but confidently about things that could be easily verified with a Google search or basic familiarity with the thing in question.
Yep. And if your heart wants to call Haiku a "reasoning model", obviously you must listen. It doesn't meet that bar for me for a couple reasons: (1) It lacks both "adaptive thinking" and "interleaved thinking" (per Anthropic, both critical for reasoning models), and (2) it also performed unacceptably with a real-world collection of very basic reasoning tasks that I tried using it for.¹ I'm glad you're having better luck with it.
That said, it's a great and affordable little model for what it was designed for!
¹ I once made the mistake of converting a bunch of skills (which require basic reasoning) to use Haiku for Axiom (https://charleswiltgen.github.io/Axiom/). It failed miserably, and wow, did users let me have it. On the bright side, as a result I'm now far better at testing models' ability to reason.
We are all reasonable people here, and while you are (mostly) correct, I think we can all agree that Anthropic documentation sucks. If I have to infer from the doc:
* Haiku 4.5 by default doesn't think, i.e. it has a default thinking budget of 0.
* By setting a non-zero thinking budget, Haiku 4.5 can think. My guess is that Claude Code may set this differently for different tasks, e.g. thinking for Explore, no thinking for Compact.
* This hybrid thinking is different from the adaptive thinking introduced in Opus 4.6, which when enabled, can automatically adjust the thinking level based on task difficulty.
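To make the inference from the docs concrete, here's roughly what the toggle looks like at the Messages API level. This is a sketch: the model id and example values are assumptions, but a `thinking` block with an explicit token budget is how extended thinking is exposed in the API:

```python
# Hypothetical request body for the Anthropic Messages API.
# With no "thinking" key, Haiku 4.5 answers directly (effectively a budget of 0);
# adding it opts in to extended thinking with an explicit token budget.
payload = {
    "model": "claude-haiku-4-5",  # assumed model id
    "max_tokens": 2048,
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [{"role": "user", "content": "Explore this codebase."}],
}
```

A harness like Claude Code could plausibly set `budget_tokens` per task (higher for Explore, zero for Compact), which matches the behavior described above.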
An AMD AI max+ 395 - I use the one from frame.work (https://frame.work/de/en/desktop) with 128GB unified RAM and it can run a 120b model (gpt-oss:120b) just fine.
No it does not. None of these models have the “depth” that the frontier models have across a variety of conversations, tasks and situations. Working with them is like playing snakes and ladders, you never know when it’s going to do something crazy and set you back.
I would say 27B matches with Sonnet 4.0, while 397B A17B matches with Opus 4.1. They are indeed nowhere near Sonnet 4.5, but getting 262144 context length at good speed with modest hardware is huge for local inference.
You mean 35B A3B? If this is shit, this is some of the best shit out I've seen yet. Never in a million years did I think I'd have an LLM running locally, actually writing code on my behalf. Accurately too.
Our shop cannot use cloud models for sensitive data and code. For shops like ours, we continue to be impressed and appreciative of the progress in open-source / self-hosted models.
In practice I have not seen this. Sonnet is incredible performance. No open model is close. Hosted open models are so much worse that I end up spending more because of inferior intelligence.
One highly annoying facet of the hardware is that AMD's support for the NPU under Linux is currently non-existent, which abandons 50 of the stated 126 TOPS of AI capability. They seem to think that Windows support is good enough. Grrrrrr.
I asked it to recite "potato" 100 times coz I wanted to benchmark CPU vs GPU speed. It's on line 150 of planning. It has recited the requested thing 4 times already and started drafting the 5th response.
Qwen3.5 pretty much requires a long system prompt, otherwise it goes into a weird planning mode where it reasons for minutes about what to do, and double and triple checks everything it does. Both Gemini's and Claude Opus 4.6's prompts work pretty well, but are so long that whatever you're using to run the model has to support prompt caching. Asking it to "Say the word "potato" 100 times, once per line, numbered.", for example, results in the following reasoning, followed by the word "potato" in 100 numbered lines, using the smallest (and therefore dumbest) quant unsloth/Qwen3.5-35B-A3B-GGUF:UD-IQ2_XXS:
"User is asking me to repeat the word "potato" 100 times, numbered. This is a simple request - I can comply with this request. Let me create a response that includes the word "potato" 100 times, numbered from 1 to 100.
I'll need to be careful about formatting - the user wants it numbered and once per line. I should use minimal formatting as per my instructions."
Good to know, thanks. I just ran ollama with qwen3.5:27b. Currently it's stuck picking a format:
Let's write.
Wait, I'll write the response.
Wait, I'll check if I should use a table.
No, text is fine.
Okay.
Let's write.
Wait, I'll write the response.
Wait, I'll check if I should use a bullet list.
No, just lines.
Okay.
Let's write.
Wait, I'll write the response.
Wait, I'll check if I should use a numbered list.
No, lines are fine.
Okay.
Let's write.
Wait, I'll write the response.
Wait, I'll check if I should use a code block.
Yes.
Okay.
Let's write.
Wait, I'll write the response.
Wait, I'll check if I should use a pre block.
Code block is better.
Yeah, it tends to get stuck in loops like that a lot with everything set to default. I wonder if they distilled Gemini at some point, I've seen that get stuck in a similar "I will now do [thing]. I am preparing to do [thing]. I will do it." failure mode as well a couple of times.
I don't quite get the low temperature coupled with the high penalty. We get thinking loop due to low temperature, and we then counter it with high penalty. That seems backward.
For Qwen3.5 27B, I got good result with --temp 1.0 --top-p 1.0 --top-k 40 --min-p 0.2, without penalty. It allows the model to explore (temp, top-p, top-k) without going off the rail (min-p) during reasoning. No loop so far.
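For anyone unfamiliar with min-p: it discards every token whose post-temperature probability falls below min_p times the top token's probability, which is why it can rein in a high temperature without a repetition penalty. A rough sketch of the idea (not llama.cpp's actual implementation):

```python
import math

def min_p_keep(logits: list[float], min_p: float = 0.2, temperature: float = 1.0) -> list[int]:
    """Return indices of tokens that survive min-p filtering."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)            # cutoff relative to the best token
    return [i for i, p in enumerate(probs) if p >= threshold]
```

With `min_p=0.2`, a token needs at least a fifth of the top token's probability to stay in the sampling pool, so even at temperature 1.0 the long tail of nonsense tokens is cut off.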
The guidelines are a little hard to interpret. At https://huggingface.co/Qwen/Qwen3.5-27B Qwen says to use temp 0.6, pres 0.0, rep 1.0 for "thinking mode for precise coding tasks" and temp 1.0, pres 1.5, rep 1.0 for "thinking mode for general tasks." Those parameters are just swinging wildly all over the place, and I don't know if printing potato 100 times is considered to be more like a "precise coding task" or a "general task."
When setting up the batch file for some previous tests, I decided to split the difference between 0.6 and 1.0 for temperature and use the larger recommended values for presence and repetition. For this prompt, it probably isn't a good idea to discourage repetition, I guess. But keeping the existing parameters worked well enough, so I didn't mess with them.
well hold on now, maybe it’s onto something. do you really know what it means to “recite” “potato” “100” “times”? each of those words could be pulled apart into a dissertation-level thesis and analysis of language, history, and communication.
either that, or it has a delusional level of instruction following. doesn’t mean it can’t code like sonnet though
It's still amusing to see seemingly simple things like that put it into a loop.
it is still going
> do you really know what it means to “recite” “potato” “100” “times”?
Asking the user a question is an option. Sonnet did that a bunch when I was trying to debug some network issue. It also forgot facts it had checked and been told before...
I wonder how much certain models have been trained to avoid asking too many questions. I’ve had coworkers who’ll complete an entire project before asking a single additional question to management, and it has never gone well for them. Unsurprising that the same would be true for the “managing AI” era of programming.
The thing I struggle most with, honestly, is when AI (usually GPT5.3-Codex) asks me a question and I genuinely don’t know the answer. I’m just like “well, uh… follow industry best practice, please? unless best practice is dumb, I guess. do a good. please do a good.” And then I get to find out what the answer should’ve been the hard way.
Nothing personally - Our customers send us highly sensitive financial documents to process. Using a foreign model to process their data (or even just for local testing) will most likely result in a u-turn.
What if you run them locally, or use a US-based provider that hosts them? IMO the provenance of the weights doesn't matter. You're right that the location of the hoster does, though.
No, it's not. They're just collections of numbers that can be harnessed to produce outputs. I check the outputs and if they're good I use them. If they're not, I ignore them and there's no harm done. Obviously I don't trust them to be accurate sources of information, but I don't trust American corporate LLMs much more.
It's not only "non-Chinese" to think about here. There's nobody really touching Qwen in the single-GPU size class and there hasn't been for a couple of generations.
All the western ones are closed while all the Chinese ones are open. The only exception is the European Mistral, but that model's performance is not very satisfactory. Hopefully they make some improvements soon.
They are trained to respond to certain topics in a way that does not align with real world evidence. Pretty much the opposite of what you want in such a tool.
This is trivial to test and verify yourself. Just pick any topic you think has a chance of being censored. You can do the same on American models and compare results.
From my personal experience, qwen 30b a3b understands commands quite well as long as the input isn't big enough to ruin the attention (I feel the boundary is somewhere between 8,000 and 12,000 tokens?). But that isn't really a bug in the model itself. A smaller model just has a shorter memory; it's simply a physical restriction.
I built a mixed extraction, cleaning, translation, and formatting task at work with an average 6,000-token input. So far, only 30b a3b is smart enough not to miss job details (most of the time).
I later refactored the task into multiple passes using a smaller model, though. Making the job simpler is still a better strategy for getting clean output if you can change the pipeline.
Impressive, very nice. Now let's see the odds that the US models developed in SV are also highly positive about California and Democratic politics.
Screaming whataboutism is the only way people know to avoid answering an obvious fact: LLMs have the biases of the governments they came from, whether that's China, the US, India, the EU, etc.