Hacker News
bottlepalm | 3 days ago | on: We Will Not Be Divided
You're spreading FUD. There's nothing you can run locally that's on par with the speed or intelligence of a SOTA model.
CamperBob2 | 3 days ago
Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still performs well after quantization.
3836293648 | 3 days ago
You may be correct about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.