If you only need it occasionally, doesn’t a subscription make sense? Just pay for the months you need it.
I’m cautious about adding subscription products I’d depend on to my toolkit, but if it’s something I definitely only need once a year, I just buy a month of it.
Although $30/mo is a bit much for what it does, so if they did go one-off, presumably it would be about $500 a license.
Apple let someone in India, a place I have never been to and Apple knows I've never been to, log into an old Apple account I'd forgotten about and hadn't logged into for 12 years, using a password from a leak. All I got was "Your Apple account has been linked to a new Mac in India".
Disgusting to me that even the most basic logic for spotting a stolen account wasn't enough of a red flag to send me an email confirmation first: has the account been used in years? Would this person, whose location data Apple has, ever be in India setting up a new computer? And on a machine type (iMac Pro) known to be commonly spoofed by hackintoshes?
Luckily the account was so old that iCloud barely stored anything back then, but it's still shocking to me.
Yeah, Sam Altman's kids will use chatbots, but here's the difference: your kids, no matter how much money you're willing to spend, will never get to use the chatbots that Sam Altman's kids will have access to in order to build their legacy.
The quality just looked so poor too; it honestly felt like 90s quality on some of the feeds. It will be criminal if they don’t sort their cameras out before the moon landing.
I don't deny that node/npm is useful for building servers, devtools for JS development itself, etc. but as an end user I haven't encountered anything useful which requires having it on my machine.
> But inference is unique because its performance scales with high memory throughput, and you can’t assemble that by wiring together off the shelf parts in a consumer form factor.
Nvidia outperforms the Mac significantly on diffusion inference and many other workloads. It’s not as simple as the current Mac chips being entirely better for this.
You don’t need it if you use llama.cpp on Windows, or if you compile it on Linux with CUDA 13 and the correct kernel HMM support, and you’re only using MoE models (which, tbh, you should be doing anyway).
What does MoE have to do with it? Aside from Flash-MoE, which supports exactly one model and only on macOS, you still need to load the entire model into memory. You also don't know which experts are going to be activated, so it's not like you can predict which ones need to be loaded.
With proper mmap support you don't really need the entire model in memory. It can be streamed from a fast SSD, and this is especially useful for MoE models, where not all expert layers are used uniformly. Of course, the more data you stream from the SSD, the slower it gets; caching things in RAM is still important for good performance.
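A minimal sketch of the idea, using numpy.memmap on a hypothetical raw weights file (the filename, dtype, and per-expert slicing are made up for illustration; runtimes like llama.cpp do the equivalent internally on GGUF files):

    import numpy as np

    # Map the file lazily: nothing is read from the SSD until a slice is touched.
    weights = np.memmap("model-weights.bin", dtype=np.float16, mode="r")

    # Touching only one expert's slice pulls just those pages into the page cache;
    # cold experts stay on disk unless they are actually routed to.
    EXPERT_ELEMS = 64 * 1024 * 1024              # assumed elements per expert
    expert_3 = weights[3 * EXPERT_ELEMS : 4 * EXPERT_ELEMS]
    checksum = expert_3[:1024].astype(np.float32).sum()   # forces an actual read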
Okay, yes, you don’t need the entire MoE model in memory for it to function.
But you still need the working set of frequently used experts to actually fit in RAM, or at least stay cached. Expert routing happens per token, per layer. If those weights aren’t resident, you’re effectively pulling them from disk on the critical path of generation — over and over again.
That’s not “just slower,” that’s an order of magnitude slower. You’ll end up with constant page faults and page-cache churn. And if swap is on the same device as the model, you’re now competing for bandwidth on top of that.
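Some back-of-the-envelope numbers (all assumed, not measured) show the size of the gap:

    # Rough, assumed figures just to illustrate the order-of-magnitude difference.
    active_expert_bytes = 2 * 4e9    # ~4B active params per token at fp16 (assumed)
    ssd_bandwidth = 7e9              # ~7 GB/s, a fast PCIe 4.0 NVMe SSD
    ram_bandwidth = 400e9            # ~400 GB/s unified memory

    print(ssd_bandwidth / active_expert_bytes)   # ~0.9 tok/s if experts live on disk
    print(ram_bandwidth / active_expert_bytes)   # ~50 tok/s if they are resident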
IMO the main benefit of mmap is the ability to reclaim cold pages during high memory-pressure events when the model isn't active.
I think the advantage of Flash-MoE over plain mmap is mostly the coalesced representation, where a single expert layer is stored as a single extent of sequential data. That could be introduced to existing binary formats like GGUF or HF - there is already a provision for differently structured representations, and it would fit easily.
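A hypothetical sketch of what a coalesced layout could look like (the on-disk format and index here are made up, not part of GGUF or safetensors; the point is just that each expert becomes one sequential extent you can grab with a single read or mmap range):

    import json
    import numpy as np

    def write_coalesced(path, experts):
        # experts: {expert_name: [tensor, tensor, ...]} - hypothetical input
        index = {}
        with open(path, "wb") as f:
            for name, tensors in experts.items():
                start = f.tell()
                for t in tensors:                 # all layers of one expert, back to back
                    f.write(np.ascontiguousarray(t).tobytes())
                index[name] = (start, f.tell() - start)
        with open(path + ".idx", "w") as f:
            json.dump(index, f)                   # (offset, length) per expert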
It’s been going on for a while. Search YouTube or the web for "48GB 4090" (one of the most popular modded Nvidia cards); Nvidia of course never officially made a 4090 with that much memory.
There are some on sale via eBay right now. The memory controllers on some Nvidia GPUs support well beyond the 16-24GB they shipped with as standard, and enterprising folks in China desolder the original memory chips and fit higher-capacity ones.
Given that most of my computers, and probably yours, and probably most of the world's, are in fact made in China one way or another (some to a greater degree than others), I'm guessing most of us trust our hardware enough to continue using it.
True. I was specifically referring to "modded Chinese hardware" from some unknown, unvetted third party versus say through a well-known brand that hopefully has its own rigorous QA and security processes in place.
I wouldn't say that's true or even likely. It's completely possible to be in a pit of vipers where every single snake is venomous, and that is pretty much what we are seeing: With technological advances, there is a certain subset of people that will use them primarily to solidify their power and control over others. There is no utopian society right now whose government doesn't look to spy through technology, which of course is best set up at time of manufacture.
Agreed. Unless you have full control over the production chain to fully produce a device, you are subject to the whims and desires of those who preside over such technological feats that we take for granted in our daily lives.
To the original point, it's safe to say that highlighting a nationality with regard to trust is baseless and without merit, as it would be for any other topic (men/women from x are y, z food is better here, etc.). Real life is much more complicated and nuanced than nationalities. Some might call it FUD (fear, uncertainty and doubt), but there's always a deeper rationale at the individual level as well.
Rather than people being wary of Chinese in general, it's more that there is a high degree of government control exercised in China and they are known to be very strategic with long-term planning in regards to technology control both for spying and actual remote control of devices. We are all just looking for the least bad option. It's not like devices from other countries are immune, but they are often less organized so there is a better chance of avoiding the Chinese level of planned access.
It does seem like pretty low risk in this specific case, so I agree OP's comment was a bit over the top, but I would have no way to make anything resembling even an educated guess as to how far their programs go.
Yes, this is really what I was referring to. And the fact that the original comment I was replying to mentioned "modded Chinese hardware" from some unspecified, unvetted 3rd party which doesn't exactly fill me with confidence.
Sadly, memory bandwidth is abysmal compared to Apple chips - 273 GB/s vs 614 GB/s on M5 Max for similar price. Even though fp4 compute is faster, it doesn't help for all the decode heavy agentic workflows.
You can still buy used 3090 cards on eBay. Five of them will give you 120GB of memory and will blow away any Mac in terms of performance on LLM workloads. They have gone up in price lately and are now about $1100 each, but at one point they were $700-800 each.
FWIW I have never used NVLink, and I’m not sure why people are bringing up “daisy chaining” because as far as I’m aware that is not a thing with modern GPUs at all.
> The mac will just work for models as large as 100B, can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.
This setup will work for 100B models as well. And yes, the Mac will draw less power, but the Nvidia machine will be many times faster. So depending on your specific Mac and your specific Nvidia setup, the performance per watt will be in the same ballpark. And higher absolute performance is certainly a nice perk.
> You can certainly daisy chain several 3090's together but it doesn't work seamlessly.
Citation needed; there's no "daisy chaining" in the setup I describe, and low level libraries like pytorch as well as higher level tools like Ollama all seamlessly support multiple GPUs.
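For what it's worth, here's a minimal sketch of what that looks like with Hugging Face transformers plus accelerate (the model name is just an example): the weights get sharded across whatever GPUs are visible, over plain PCIe, no NVLink involved.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-70B-Instruct"   # example model

    tok = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" (via accelerate) splits the layers across all visible GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.float16
    )

    inputs = tok("Hello", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tok.decode(out[0], skip_special_tokens=True))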
1800W is the max on a 15A circuit, but yes, it’s usually under 1600W. For LLM inference, limiting the TDP to 225W or so per card saves a lot of power, for a 5% drop in performance.
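For reference, a rough sketch of capping the power limit programmatically with nvidia-ml-py, which is the same thing "nvidia-smi -pl 225" does (usually needs root; 225 W is just the sweet spot mentioned above):

    from pynvml import (
        nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
        nvmlDeviceGetHandleByIndex, nvmlDeviceSetPowerManagementLimit,
    )

    LIMIT_MW = 225 * 1000        # NVML takes milliwatts

    nvmlInit()
    try:
        for i in range(nvmlDeviceGetCount()):
            handle = nvmlDeviceGetHandleByIndex(i)
            nvmlDeviceSetPowerManagementLimit(handle, LIMIT_MW)   # cap each card
    finally:
        nvmlShutdown()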
> I think it's bad form to say "citation needed" when your original claim didn't include citations.
I apologize, but using multiple GPUs for inference (without any sort of “daisy chaining”) is something that’s been supported in most LLM tooling for a long time.
> Regardless - there's a difference between training and inference.
No one brought up training vs. inference to my knowledge, besides you — I was assuming the machine was for inference, because my experience building a machine like the one I described was in order to do inference. If you want to train models, I know less about that, but I’m pretty sure the tooling does easily support multiple GPUs.
> And pytorch doesn't magically make 5 gpus behave like 1 gpu.
I never said it was magic, I just said it was supported, which it is.
Where are you gonna find Apple hardware with 128GB of memory at an enthusiast-compatible price?
The cheapest Apple desktop with 128GB of memory shows up as costing $3499 for me, which isn't very "enthusiast-compatible"; it's about 3x the minimum salary in my country!
It seems I misunderstood what an "enthusiast" is. I thought it was about someone being "excited about something", but it seems the typical definition includes having a lot of money too. My bad.
I'm an immigrant to Canada, and yes, English has both literal meanings and colloquial meanings.
In the most literal meaning, absolutely, "Enthusiast" just means a person who likes something, is excited about something.
When it comes to market and products though, typically you'll see the word "Enthusiast" as mid-tier - something like: Consumer --> Enthusiast --> Professional (may have words like "Prosumer" in there as well etc:)
In that context, which is typically the one people will use when discussing product pricing and placement, "Enthusiast" is somebody who yes enjoys something, but does it sufficiently to be discerning and capable of purchasing mid-tier or above hardware.
So while a consumer photographer may use their phone or a compact or all-in-one camera, an enthusiast photographer will probably spend $3000-$5000 on camera gear. Equivalently, there are myriad gamers out there (on phones, consoles, GeForce Now, whatever:), but an enthusiast gamer is assumed to have a dedicated gaming computer, probably a tower, with a dedicated video card, likely say a 5070 Ti or above, probably 32GB+ RAM, a couple of SSDs which are not entry level, etc.
Again, this is not to say a person with limited budget is "not a real enthusiast", no gatekeeping is intended here; simply, if it may help, what the word means when it comes to market segmentation and product pricing :)
Additionally, "enthusiasts"/"hobbyists" tend to be willing to spend beyond practical utility, while professionals are more interested in pragmatism, especially in photography from what I can tell.
If you're an actual pro, you need your stuff to work properly, efficiently, reliably, when it's called for. When you're a hobbyist, it's sometimes almost the goal to waste money and time on stuff that really doesn't matter beyond your interest in it; working on the thing is the point, not the value it generates. Pros should spend money on good tools and research and knowledge, but it usually needs to be an investment, sometimes crossing over with hobbyist opinions.
A friend of mine who's a computer hobbyist and retail IT tech, making far far less than I do, spends comically more than me on hardware to play basically one game. He keeps up to date with the latest processors and all that stuff, he knows hardware in terms of gaming. I meanwhile—despite having more money available—have a fairly budget gaming PC that I did build myself, but contains entirely old/used components, some of which he just needed to get rid of and gave me for free, and I upgrade my main mac every 5 years or something. I only upgrade when hardware is really getting in my way.
>> So while a consumer photographer, may use their phone or compact or all-in-one camera, enthusiast photographer will probably spend $3000 - $5000 in camera gear.
It's interesting that you chose photographers as the example here. In many cases that I've seen, enthusiast photographers spend much more than professional photographers on their gear, because the professionals make their money with their gear and therefore need to justify it, while the enthusiasts are often tech people, successful doctors, etc., who spend lots and lots of money on their hobbies...
In any case, your point stands, that "enthusiast" computer users would easily spend $3-4K or more on gear to play games, train models, etc.
$3.5k is a lot of money, but not a ton by American hobby standards. It's easy to spend multiples, even orders of magnitude more than that on hobbies like fishing, wine, sports tickets, concerts, scuba, travel, being a foodie, golf, marathons, collectibles, etc.
It's out of reach for lots of people, even in developed countries. But it's easily within reach for loads of people that care more about computing than other stuff.
I live in America, I am very well compensated. Have been for 15 years now. $3500 is a lot of money. A lot. There is a tiny bubble of us tech folks who think it is accessible to most people. It is not. It is also the same reason Macs are still a niche. Don't take your circles to be the standard, it is very very far from it, especially if you think $3500 is not a lot of money.
It is easy to confirm this, just look at the sales number of these $3500 devices. It is definitely not an enthusiast price point, even in the US.
It's not nothing for most people... it's more than a month of rent/mortgage for a significant number of Americans even. But if it's your primary hobby, it's not completely out of reach, and it's not something you necessarily spend every year. A lot of people will upgrade to a new computer every 3-5 years and maybe upgrade something in between those complete system upgrades.
I know plenty of people who don't make a lot of money (say top 25% or so) that will have a boat or RV that costs more than a $3500 computer, and balk at the thought of spending that much on a computer. It just depends on where your interests are.
The first words I said: "$3.5k is a lot of money..."
There are tens of millions of top 10% income adults in America. So something can be both unaffordable to most people, and also easily accessible to very many people.
It’s a midrange to upper expense in the US if it’s your hobby. Most people don’t have a serious computer hobby but they golf, trade ATVs, travel, drink, etc.
Mac has about 15% of the market share in the US. It's not really a niche.
$3500 is more than I would spend on a hobby too, but there are, in absolute terms, a large number of Americans who can spend this much on their hobbies.
There is no Apple device priced above $3k that has sold a million units in a year. The US population is >300M, so that's <0.3% of the population. Don't take your bubble to be representative of society. $3500 is a lot of money, even in the US.
$3500 would have been 3–4 months' discretionary spending as a PhD student in Finland 15 years ago. A sum you might choose to spend once a year on something you find genuinely interesting.
Some people succumb to lifestyle creep or choose it deliberately. Others choose to live below their means when their income grows. The latter have a lot more money to spend on extras, or to save if that's what they prefer.
In June 1977, the base Apple II model with 4 KB of RAM was $1,298 (equivalent to about $6,900 in 2025), and with the maximum 48 KB of RAM it was $2,638 (equivalent to about $14,000 in 2025).
Wow, 48k for $14000. Now you can get a MBP with a million times more memory for $3500 or so. Whereas that CPU was clocked at 1 MHz, so CPUs are only several thousand times faster, maybe something like 30,000 times faster if you can make use of multi-core.
I'd argue that some of those are more consumption and activity than hobby depending on how they're engaged with, and that people use the word "hobby" too loosely, but would agree that Americans in-particular consume at obscene rates.
Golf equipment, mountaineering equipment, skiing and snowboarding lift tickets and gear, a single excessive graphics card that's only used for increasing frame rates marginally, or basically a single extra feature on a car, are all things that accumulate quite quickly. Some are clearly more superfluous than others and cater to whales, while some are just expensive by nature and aren't attempting to be anything else
Those are the prices for just buying equipment, which at least retains some kind of value. 3 million+ American kids are enrolled in competitive soccer, with annual club dues between $1K and $5K, and that money is just gone at the end of the year. Basically none of those kids are going to have a career in soccer, so it's clearly a hobby, and everyone knows it. And soccer isn't even the most popular sport!
An enthusiast in a hobby space is by definition someone willing to pour much more money into that hobby than someone who isn't that enthusiastic about it.
Well, and who also has a bunch of money, not just the willingness. I guess locally we don't really make that distinction, as the two other commenters here pointed out, which is why I had to update my local understanding of "enthusiast". Usually we use it for how engaged/interested a person is, regardless of how much money they can or are willing to spend.
Learned something new today at least, so that's cool :)
Yes, when tech gear is sold as 'enthusiast' gear, it is almost invariably the most expensive non-professional tier of equipment. That is roughly the common understanding: expensive, and focused more on features than on the security required for public use, while remaining within reach of at least some individuals, not only corporations.
For an individual making median income in the US, it would cost 2% of your income to get a machine like this every 4-5 years. That's a matter of enthusiasm, not a matter of having a lot of money. Sorry that income is less where you are, but the people talking about the product tier are using American standards.
Enthusiast compute hardware doesn't cater to the people on the minimum salary in any country, let alone developing nations. When Ferrari makes a car they don't ask themselves if people on minimum salary will be able to afford them.
I'm in one of the two poorest EU member states, and Apple and Microsoft (Xbox) don't even bother to have a direct-to-customer store presence here; you buy their products from third-party retailers.
Why? Probably because their metrics show people here are too poor to afford their products en masse, so it's not worth operating a dedicated sales entity. Even though plenty of people do own top-of-the-line MacBooks here, it's just the wealthy enthusiast niche, and it's still a niche for the volumes they (wish to) operate at. Why do you think Apple launched the Mac Neo?
Right, I think maybe we're actually talking about "upper-class enthusiasts" or something then? I understood the word to just be about the person, not what economic class they're in; maybe I misunderstood.
>Right, I think maybe we're then talking about "upper class enthusiasts" or something in reality then?
Why? Enthusiasts are by definition people for whom value for money is not the main driver, but rather top performance and cutting-edge novelty at any cost. Affording enthusiast computer hardware is not a human right, just as affording a Lamborghini or a McMansion isn't.
But you don't need to buy a Lamborghini to do your grocery shopping or drive your kids to school, just as you don't need an Nvidia 5090 or MacBook Pro Max to do your taxes or your school work.
So the definition is fine as it is. It's hardware for people with very deep pockets, often called whales.
Enthusiast in this context more or less means you are excited enough about something to get a level above what normal people should get, and just below professional pricing. An enthusiast camera body can be 2000 euros.
I would say an enthusiast computer is 2-4k.
It really depends what you mean by minimum salary (yearly?), because paying three months of salary for a computer like that isn't far-fetched. You're not using this to generate recipes for cookies. An enthusiast-level car is expensive as well.
I spent around that on my current personal desktop: 9950X, 2x48GB DDR5-6000, RX 9070 XT, 4TB Gen 5 NVMe + 4TB Gen 4 NVMe. I could have cut the CPU to a 9800X3D and the RAM to 32GB with a different GPU if my needs/usage were different. I'm running Linux and don't game much.
That said, a higher-end gaming setup is going to cost that much and is absolutely in the enthusiast realm. "Enthusiast" doesn't mean compatible with "minimum wage".
This has changed since Sam Altman started buying up all the chip supply, raising prices on memory, storage, and GPUs for everyone, but it used to be the case that you could build a PC that was both cheaper and faster than a Mac for LLM inference, with roughly equal performance per watt.
You would use multiple *90-series GPUs, throttled down in terms of power. Depending on the GPU, the sweet spot is between 225-350W, where for LLM workloads you only lose 5-10% of performance for a ~50% drop in power consumption.
Combined with a workstation (Xeon/Epyc) CPU with lots of PCIe, you can support 6-7 such GPUs (or more, depending on available power). This will blow away the fastest Mac studio, at a comparable performance per watt.
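Rough lane and power math for a build like that (all numbers assumed for illustration):

    gpus = 7
    lanes_per_gpu = 16      # x16 per card; x8 is usually fine for inference
    epyc_lanes = 128        # typical single-socket Epyc lane count

    print("PCIe lanes:", gpus * lanes_per_gpu, "needed,", epyc_lanes, "available")

    gpu_w = 225             # power-limited sweet spot per card
    rest_w = 300            # CPU, board, drives, fans (rough guess)
    print("rough steady-state draw:", gpus * gpu_w + rest_w, "W")
    # ~1875 W: already past a single 15A/120V circuit, hence "depending on available power"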
Again, a lot of this has changed, since GPUs and memory are so much more expensive now.
Macs are great for a simpler all in one box with high memory bandwidth and middling-to-decent GPU performance, but they are (or were) absolutely not "untouchable."
I think OP’s point was that it would do more than 2-3x the workload, thus them stating “blow it out of the water” and specifying “performance-per-watt”.
Untouchable my ass. You get a PC that has an SSD glued to the motherboard, so if you run write-intensive workloads and it wears out, replacing it will have significant cost. Then there’s no PCIe slot to add a decent network card if you want to run more than one of them in unison; you’re stuck with that stupid Thunderbolt 5, while InfiniBand gives 10x the network speed. As for memory bandwidth, it’s fast compared to CPUs, but any enterprise GPU dwarfs it significantly. The unified RAM is the only interesting angle.
Apple could have taken a chunk of the enterprise market now, with the AI craze, if they had made an upgradable and expandable server edition based on their silicon. But no, everything has to be bolted down and restricted.
> Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.
I think you can do better than the proverbial Apples and Oranges comparison.
In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.
A 128GB 2TB Dell Pro Max with Nvidia GB10 is about $4200, a Mac Studio with 128GB RAM and 2TB storage is $4100. So pretty comparable. I think Dell's pricing has been rocked more by the RAM shortage too.
Unfortunately the GB10 is incredibly bandwidth-starved. You get 128GB of RAM, but only 270GB/s of bandwidth. The M3 Ultra Mac Studio gets you 820GB/s (the M4 Max is at 410GB/s). I'm not aware of any workload that gets the GB10 to its theoretical peak FLOPS.
From the spec sheets I’m looking at, it is not. I’m seeing models of the Dell Pro Max with 128 GB of DDR5-6400 as CAMM2, then a separate memory of up to 24 GB on the GPU. CAMM2 does not make the memory unified.
You're not looking at the right thing. Dell's naming is horrible. Dell Pro Max with GB10 (https://www.dell.com/en-us/shop/cty/pdp/spd/dell-pro-max-fcm...). It's a very different computer than what you're looking at and has 128GB LPDDR5X unified memory.
AFAIK, for the unified memory bandwidth, it depends mostly on the CPU: for the M4 Max (I think it's the default today?) it does ~550 GB/s, while the GB10 does ~270 GB/s, so about a 2x difference between the two. For comparison, the RTX Pro 6000 does 1.8 TB/s, pretty much the same as a 5090, which is probably the fastest/best GPU a prosumer could reasonably get.
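Those figures fall straight out of bus width times data rate; a quick sanity check (assuming LPDDR5X-8533 on both the GB10 and the M4 Max, and 28 Gbps GDDR7 on the 5090, which I believe are the right numbers):

    def bandwidth_gbs(bus_bits, mt_per_s):
        # bytes per transfer * millions of transfers/s -> MB/s -> GB/s
        return bus_bits / 8 * mt_per_s / 1e3

    print("GB10  :", bandwidth_gbs(256, 8533))    # ~273 GB/s
    print("M4 Max:", bandwidth_gbs(512, 8533))    # ~546 GB/s
    print("5090  :", bandwidth_gbs(512, 28000))   # ~1792 GB/s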
No, that's why Apple uses performance per watt, not the actual performance ceiling, as the metric. In actual workloads where you'd need this power, raw performance is what matters, not PPW.
Probably comparable, but only against business-grade products; that's why Apple's current silicon is so remarkable on the market at the consumer level.
It has a HDMI port and its USB-C ports also support display out. But I believe most who buy it intend to use it headless. The machine runs Ubuntu 24.04 and has a slightly customised Gnome (green accents and an nvidia logo in GDM) as its desktop.