Hacker News

I will, right now.

EDIT: opencode was a bit slow with qwen3.5:35b via Ollama. It's faster and nicer to use with Liquid's lfm2:latest.

Try llama.cpp - it usually excels with these MoE models imho.
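If you want to try that route, a minimal llama.cpp invocation looks something like this (the model path and settings are placeholders, not from the thread):

```shell
# Serve a local GGUF model with llama.cpp's built-in HTTP server.
# -m   path to a GGUF file you've downloaded (hypothetical path here)
# -c   context size in tokens
# -ngl number of layers to offload to the GPU (99 = offload everything)
./llama-server -m ./models/your-moe-model.gguf -c 4096 -ngl 99 --port 8080
```

The server exposes an OpenAI-compatible API on the given port, so clients like opencode can point at http://localhost:8080 instead of Ollama.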


