Hacker News | ycui1986's comments

So, dual RTX PRO 6000

I really like the pro version. The pelican is so cute.

32GB of RAM on a Mac also needs to host the OS, software, and other stuff. There may not even be 24GB of VRAM left for the model.
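A rough back-of-envelope for that unified-memory budget (the overhead numbers below are illustrative assumptions, not measurements):

```python
# Unified memory on a 32GB Mac is shared between the OS and the model.
total_gb = 32
os_and_apps_gb = 6         # macOS plus background software (assumed)
inference_overhead_gb = 2  # runtime buffers, KV-cache headroom (assumed)

# What is actually left for model weights.
usable_for_weights_gb = total_gb - os_and_apps_gb - inference_overhead_gb
print(usable_for_weights_gb)  # 24 at best, often less in practice
```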


Just because a bunch of rockets went up without blowing up doesn't mean they are profitable. It costs money to launch a rocket, and it is very expensive, reusable or not. Most launches are internal, without external paying customers.


Another $60 billion to save a failed AI endeavor.


Outputting .docx files doesn't have much to do with model capability. It's about whether tool calling has been configured.
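To sketch what that configuration looks like: in the common OpenAI-style function-calling format, the host app declares a tool the model may invoke. The `write_docx` name and its schema here are hypothetical, purely for illustration:

```python
# Hypothetical tool declaration in the OpenAI-style function-calling format.
# The model never writes the file itself; it only emits a structured call
# like write_docx(path=..., text=...), which the host must implement and run.
write_docx_tool = {
    "type": "function",
    "function": {
        "name": "write_docx",
        "description": "Write the given text to a .docx file on disk.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Output file path"},
                "text": {"type": "string", "description": "Document body"},
            },
            "required": ["path", "text"],
        },
    },
}
print(write_docx_tool["function"]["name"])
```

So whether a given setup can "output docx" depends on the host wiring up a tool like this, not on the underlying model.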


There are also many Chinese producers of AI-targeted GPUs/NPUs. You can get hold of some boards on taobao.com. They are usable, to a degree.

No, NVIDIA and AMD are not the only ones benefiting.


I run it on real Ubuntu, no VM, no Docker. As long as I don't ask it to organize files, it behaves. It hasn't screwed me over so far.


Godspeed


I only run it with --dangerously-skip-permissions. YOLO!


Qwen3.5 and Qwen3.6 are both good at tool calling.


For many LLM workloads, ROCm seems slower than Vulkan. What's the point?


Compatibility: foundation packages like torch and onnx-runtime can run on AMD GPUs without massive architectural changes. That's the biggest reason to have it for all the stuff that "only works on NVIDIA GPUs". It's not faster where a Vulkan alternative exists, but at least it runs.

