
A Strix Halo with 128GB unified memory is less than $2k and is the more suitable alternative to a Mac. I'm pretty happy with my device (Bosgame M5).



The Macs outperform it, and I figure a Mac is a better general-purpose computer than a Strix Halo. If budget is a problem, then a Strix Halo is a decent alternative.

Well, a Mac isn't really an alternative to a Mac, is it? ;)

Personally I'm not interested in having a Mac, as I work with Linux. And yes, Macs outperform it, but only if you ignore the price. Comparing what you get for ~$2k, a Strix Halo is miles ahead.


A Mac doesn't run Linux, so in my books it's a worse general-purpose computer than a Strix Halo box.

> A Strix Halo with 128GB unified memory is less than $2k

Where did you get that price? Everywhere I looked it's around €3k, which is around $3.5k.


Directly from Bosgame.com, for ~1.7k€ in December. I see it's at $2.2k / 1.9k€ now.

Why I haven't checked their site first is beyond me :) Thank you for this! You say you're satisfied, right?

Can you elaborate more on your use cases, models, setup,...?

I took my setup from here: https://github.com/kyuz0/amd-strix-halo-toolboxes

Still a lot to learn, but after a while you have something like Qwen3-Coder-Next-Q8_0 running, and - at least for me - it works quite well, both as a ChatGPT-like chat interface using llama.cpp and as a coding agent.
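
For illustration, the API side of that looks roughly like this - a minimal sketch against a locally running llama-server, where the port, model name and prompt are just placeholders, not necessarily my exact setup:

    # Sketch: chat completion against llama.cpp's OpenAI-compatible server.
    # Assumes `llama-server` is listening on localhost:8080 with some GGUF
    # model loaded (e.g. a Qwen3 quant); adjust host/port to your setup.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder; llama-server serves whatever is loaded
            "messages": [
                {"role": "system", "content": "You are a helpful coding assistant."},
                {"role": "user", "content": "Write a Python one-liner that reverses a string."},
            ],
            "temperature": 0.2,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

The same endpoint is what coding agents point at when you give them an OpenAI-compatible base URL.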


I'm not really using them for coding (only played a little bit with minimax2.1), which is probably the most common use case here.

I mainly use them for deep work with texts and for deep research. My main criterion is privacy, both for legal reasons (I'm in the EU and can't and don't want to expose customers' data to non-GDPR-compliant services) and personally: I wouldn't use US services either, e.g. I would never explore health-related topics with ChatGPT or Gemini, for obvious reasons.

Technically, I've set it up in my office with llama.cpp and exposed it (both the chat interface and the OpenAI-compatible API) over a simple WireGuard tunnel, behind nginx with HTTP auth. Now I can use it everywhere. It's a small, quiet and pretty fast machine (compiling llama.cpp takes around 20 seconds?), and I quite like it.
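
As a rough sketch of what the client side of that can look like, assuming the tunnel is up - the hostname and credentials here are made up, the real ones obviously differ:

    # Sketch: calling the same OpenAI-compatible API remotely over the
    # WireGuard tunnel, through nginx with HTTP basic auth.
    # "llm.example.internal" and the credentials are placeholders.
    import requests

    resp = requests.post(
        "https://llm.example.internal/v1/chat/completions",
        auth=("me", "my-http-auth-password"),  # nginx HTTP basic auth
        json={
            "model": "local-model",  # placeholder; llama-server serves whatever is loaded
            "messages": [{"role": "user", "content": "Summarize the attached notes in three bullet points."}],
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])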



