
What really matters (for people who do the CAPEX and OPEX math on their assets; not gamers) is the performance/power ratio. Without that I don't see AMD eating much of Intel's lunch (35B vs 208B market cap).
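
To make that concrete, here's a back-of-the-envelope sketch of the OPEX side in Python. Every number in it is an illustrative assumption, not a measurement:

    # Rough annual electricity cost per server at the wall.
    # All inputs below are assumed, illustrative values.
    watts_amd, watts_intel = 280.0, 400.0  # hypothetical sustained draw per server
    pue = 1.5                              # datacenter power usage effectiveness
    usd_per_kwh = 0.10
    hours_per_year = 24 * 365

    def annual_power_cost(watts):
        return watts * pue * hours_per_year / 1000 * usd_per_kwh

    delta = annual_power_cost(watts_intel) - annual_power_cost(watts_amd)
    print(f"~${delta:.0f}/year per server")  # ~$158; multiply by fleet size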


The Zen 2 16-core chip is 105 W TDP. The chips it's wiping the floor with are 165 W TDP. TDP doesn't necessarily correlate with real-world usage, but benchmarks show that AMD's chips run much closer to their rated TDP than Intel's do, so the gap is probably actually wider. The strength of Intel chips is being able to pump a lot of power through them to hit higher clock rates.

It sounds like you're saying performance/power is a benefit for Intel, possibly based upon the history of AMD chips, but that line of thought has been wrong since the Ryzen architecture.


> benchmarks show that AMD's chips run much closer to their rated TDP than Intel's do, so the gap is probably actually wider

AMD gives their TDP with turbo enabled (closer to real usage); Intel gives TDP at rest, with no turbo enabled.

There is still some variance between rated and real TDP from both, but the core of the difference is well understood, and it dates back almost a dozen CPU generations, to when Intel had to guzzle power like crazy to overclock its chips in the vague hope of competing with AMD's products of the time (and it never reverted the practice once it took the lead back with the Core architecture).

It's kind of similar to the whole "Intel wants comparisons done with SMT off" thing: because the last 15 years were theirs, the whole benchmarking culture is biased toward Intel... yet they still massively lose those comparisons.


This is accurate: several "95W"-TDP Intel chips will happily guzzle upwards of 200 W for sustained periods (provided they don't downclock due to heat).


No, this is absolutely not accurate. It only happens because motherboard defaults run all cores at turbo speeds simultaneously, with automatic overclocking behaviors on by default.


The reasoning doesn't matter; what matters is what the average consumer sees. If most consumer motherboards do it "wrong", Intel should use those numbers instead of the less common but "correct" case.


And almost every benchmark will run on one of those motherboards. Or do you have a list of curated benchmarks that were done with the TDP limits set per spec?


This is wrong: Intel's TDP is accurate with turbo configured correctly, per spec. The problem is that nearly every popular motherboard out there enables turbo on all cores simultaneously ("Enhanced Multi-Core" on ASUS boards, for example), which blows the TDP out of spec massively.

As a Small Form Factor enthusiast, I can attest to this with utmost confidence. The chips will run at their expected TDP when configured as the factory specifies; that's just not the default on almost any enthusiast board from the well-known companies. In the case of ASUS it can actually be a bit of a battle to get things to run as Intel specifies, with both MCE and the automatic overclocking behaviors.


> The problem is that nearly every popular motherboard out there enables turbo on all cores simultaneously ("Enhanced Multi-Core" on ASUS boards, for example), which blows the TDP out of spec massively.

If that's the case, then the performance is "massively blown out" too, since essentially all the benchmarks around are based on popular motherboards.

AnandTech did a test some time ago with a real, fixed 95 W TDP[¹], and it ain't pretty.

It's definitely good for Intel that "every popular motherboard" is, uh, guilty of going out of spec, otherwise, the popular opinion of Intel chips would be significantly lower.

Regardless, I'm also not really convinced that this can be considered "cheating" by the motherboards. According to the official Intel page [²]:

> The processor must be working in the power, temperature, and specification limits of the thermal design power (TDP)

so ultimately, it's the CPU that sets the performance/consumption ceiling.

[¹] https://www.anandtech.com/show/13591/the-intel-core-i9-9900k... [²] https://www.intel.com/content/www/us/en/support/articles/000...


And you can do the same on quite a few AMD boards with "Precision Boost Overdrive", which gives you a 300 W TDP on a 1950X if your cooling can handle it. Or you can ignore turbo and flat overclock the thing, which, depending on workload, will produce better results.

Source: I have one of these.


I think Intel still had the performance/watt advantage even against 1st-gen Zen, though the gap was smaller. It's Zen 2 that has completely obliterated Intel in performance/watt, which is almost a bigger shock than AMD taking the performance crown. Taking both the performance and efficiency crowns in a single generation, especially when it's not even a fully new generation, is beyond impressive. Of course it was only feasible with the foundation built up by Zen 1, but it's still very encouraging after the long stagnation in desktop processors.


It's going to depend heavily on which specific CPU you're talking about and which specific workload, but at least on Handbrake the Ryzen 2700 was the performance/watt king per legitreviews: https://legitreviews.com/wp-content/uploads/2018/05/performa...

Maybe Intel took that back with their lower-clocked 8c/16t chips; dunno, this isn't something that comes up all that much in consumer reviews. But there's at least no significant gap in either direction; it's pretty much a wash.

On the server side of things, AnandTech didn't go into it much, but see at least this one: https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd...

The dual EPYC 7601 used 100 W less than the Xeon competition in POV-Ray while also being the fastest system there by a substantial margin, which puts performance, power, and performance/watt all firmly in the EPYC 7601's column on that test. And Intel took MySQL back. So, a 50/50 split.


There are a lot of factors to unpack here, but the 8700K has two fewer cores than the 2700X, which is why the 8700K comes out behind. The direct comparison here is the 9900K, but the 9900K ships with significantly higher stock clocks (4.7 GHz all-core), which also reduces its perf/watt.

When limited to its "official" 95W TDP, the 9900K does about 4.3 GHz and has a higher perf/watt than Ryzen (both higher performance and lower power consumption).

So basically you're in a situation where the Ryzen pulls less power at stock and has slightly higher efficiency at stock, but has a much lower clock ceiling, while the 9900K ships with much higher clocks and worse efficiency but has a much lower power floor if you pull the clocks back to 2700X levels.

https://static.techspot.com/articles-info/1744/bench/HandBra...

https://static.techspot.com/articles-info/1744/bench/Power_H...

https://www.techspot.com/review/1744-core-i9-9900k-round-two...

Of note, the 2700X is actually pulling ~130W under AVX loads (33W more than the 95W-limited 9900K).

The Stilt noted that the default power limit AMD ships is 141.75 W, and the 2700X will run at it for an unlimited amount of time (whereas Intel at least claims PL2 obeys a time limit, although in practice all the mobo companies violate the spec and boost indefinitely as well). So really, "TDP" is a joke all around these days. Nobody respects TDP limits when boosting, and it doesn't directly correspond to base clocks either (both the 9900K and the 2700X can run above base clocks at rated TDP). It is just sort of a marketing number.

https://forums.anandtech.com/threads/ryzen-strictly-technica...
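
If you want to see what limits your own machine actually enforces, Linux exposes PL1/PL2 through the intel_rapl powercap driver. A minimal sketch (path and file names assume that driver is loaded; reading may require root):

    # Print the package power limits (PL1/PL2) the OS can see.
    # Assumes Linux with the intel_rapl powercap driver loaded.
    from pathlib import Path

    pkg = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL zone
    for c in (0, 1):  # constraint 0 = long-term (PL1), 1 = short-term (PL2)
        name = (pkg / f"constraint_{c}_name").read_text().strip()
        watts = int((pkg / f"constraint_{c}_power_limit_uw").read_text()) / 1e6
        window = int((pkg / f"constraint_{c}_time_window_us").read_text()) / 1e6
        print(f"{name}: {watts:.0f} W over a {window:.4f} s window")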

Epyc is a different matter, and once again more cores translate into better efficiency than fewer, higher-clocked cores. But the gotcha there is that Infinity Fabric is not free either: the Infinity Fabric alone is pulling more than 100 W on Epyc chips (literally half of the total power!).

https://www.anandtech.com/show/13124/the-amd-threadripper-29...

Similarly, the 2700X spends 25 W on its Infinity Fabric, while an 8700K spends only 8 W. So Infinity Fabric pulls roughly 3x as much power as Intel spends on its ringbus. This hits the consumer chips a lot harder; the mesh interconnect on Skylake-X and Skylake-SP is closer to Infinity Fabric power levels (but still lower).

Plus, GF 14nm wasn't as good a node as Intel 14nm. So Ryzen is starting from a worse node.

The upshot: core for core, power efficiency on first-gen Ryzen and Epyc was inferior, but of course Epyc lets you have more cores than Xeon. The Ryzen consumer platform's efficiency was strictly worse than Intel's, though.

And that goes double for laptop chips, which are the one area that Intel still dominates. Raven Ridge and Picasso are terrible for efficiency compared to Intel's mobile lineup. And AMD mobile won't be moving to 7nm until next year.

Because of that whole "nobody obeys TDP and it doesn't correspond to base clocks or any other performance level" situation, we'll just have to wait for reviews to see what Zen 2 and Epyc are actually like. I am really interested in the Infinity Fabric power consumption; that's potentially going to be the limitation as we move on to 7nm and core power goes down, while AMD scales chiplet count up further.


I somehow completely missed this coverage of Infinity Fabric power usage. I wonder if the IF share of power remains the same in this generation or has been reduced. If not, improving IF power usage would remain a viable opportunity to make these chips even more power efficient. Given the IF numbers, it's clear I was even more uninformed about the power usage of first-gen Zen cores than I thought.


This is good to know. Are there any reputable benchmarks that show those advantages? Something like FLOPS/watt on some LAPACK or TensorFlow test, or the number of joules to compile the Linux kernel, or anything of this sort?
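
In the meantime, the joules-to-compile number is something you can measure yourself on Linux via the RAPL counters. A rough sketch (assumes the intel_rapl powercap driver, likely needs root; the counter wraps, hence the modulo):

    # Measure package energy consumed by an arbitrary command, e.g.:
    #   sudo python3 joules.py make -j16
    # Assumes /sys/class/powercap/intel-rapl:0 exists (intel_rapl driver).
    import subprocess, sys
    from pathlib import Path

    zone = Path("/sys/class/powercap/intel-rapl:0")
    wrap = int((zone / "max_energy_range_uj").read_text())

    def energy_uj():
        return int((zone / "energy_uj").read_text())

    start = energy_uj()
    subprocess.run(sys.argv[1:], check=True)
    joules = ((energy_uj() - start) % wrap) / 1e6  # handle counter wraparound
    print(f"{joules:.1f} J package energy")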


> It's Zen 2 that has completely obliterated Intel in performance/watt, which is almost a bigger shock than AMD taking the performance crown.

Why is this shocking? Zen 2 is 7nm and Intel's latest is at 14nm. It would be a far bigger shock if they didn't beat Intel in performance/watt. Zen 2 vs whatever Intel releases on 10nm in the next ~6-18 months is a much more interesting comparison.


I believe he's speaking historically.

AMD wasn't really a consideration except for budget builds until they launched the Athlon in the late '90s. The success of the Athlon was as much about Intel's fumble with NetBurst as it was about the Athlon being a solid competitor.

It took Intel almost a decade to roll out Core and in that time AMD failed to capture the market despite making tremendous gains and legitimizing itself.

Ultimately AMD fumbled with the Bulldozer/Excavator lines of CPUs and lost almost everything they had gained.

The reasons AMD couldn't capture the market are complex but the short answer is that Intel influences every aspect of a computer from software, to compilers, to peripherals, to firmware.


> It took Intel almost a decade to roll out Core and in that time AMD failed to capture the market despite making tremendous gains and legitimizing itself.

And by "AMD failed" you mean Intel used illegal means to stop them, right?

The US, Japanese, and Korean fair-trade-commission equivalents all either blamed Intel or fined them. The EU was still too young in that area to act in time, but in 2009 it handed Intel one of its biggest fines ever, €1.06 billion (about $1.45 billion at the time), for what they did, along with an appropriate "oh, and if you do it again we won't be late, and we won't be so nice".

Calling it "AMD failed to capture the market" is technically true, but that's one funny point of view.


>AMD failed to capture the market despite making tremendous gains and legitimizing itself.

Because Intel played dirty and illegal[0].

[0]: https://www.youtube.com/watch?v=osSMJRyxG0k


> Ultimately AMD fumbled with the Bulldozer/Excavator lines of CPUs

I've heard this baseless assertion before but so far I've never heard any semblance of support. Why do you believe that AMD "fumbled" with their Bulldozer line?


As someone who works on low-level CPU performance code, everyone I work with "knows" that Bulldozer was a performance dud for most use cases.

This article about Zen starts with an overview of why Bulldozer failed to deliver: https://arstechnica.com/gadgets/2017/03/amds-moment-of-zen-f...


How about the fact that it performed about as well as the previous generation in multi-threaded workloads but worse in single-threaded workloads?

Or that while it was power efficient at idle, it was exceptionally power hungry under load?

Maybe it was when the CEO admitted it failed to meet expectations, said we'd have to wait 4 years for a successor, and then stepped down?

Idk... I'm probably way off base.


Not Ryzen related, but you seem pretty up to speed on AMD products. Does that include Radeon as well? I have an MBP and I'm considering a Radeon VII for my external GPU (currently a GTX 1080, but it's only usable in Windows; thanks, Mojave). My main concern is thermals and noise. Does it perform on par with Nvidia there, or a little worse, or considerably so? Power draw I'm not that concerned with.


In the most recent generations, Radeon has run hotter than Nvidia cards for similar performance. That seems to be true of the Radeon VII as well [0].

[0] https://www.theverge.com/2019/2/11/18194190/amds-radeon-vii-...


Thanks for that. That's a huge bummer. I really wish Apple wouldn't force the Metal issue with Nvidia. Yeah, it'd be nice and all, and as a user I'm fine with the various scripts I have to run after macOS updates to get the card running again, but they just nixed that outright. Oh well; hopefully AMD can solve the fan problems, or Nvidia and Apple can work something out, either or.


The Radeon 5700 and 5700 XT are supposed to be competitive with the RTX 2060 and RTX 2070 at slightly lower prices. Only reference cards right now, but things might be looking up once OEMs have a chance to put better coolers on instead of AMD's reference blower.

I'm planning to hold out for next gen when they get ray tracing hardware to be a bit more future proof (my GTX 970's not dead yet), but since I'm thinking of trading my Wintendo out for a Mac + eGPU setup it's nice to see that AMD could actually be a good GPU option now.

Those were just announced this week, so keep an eye out for 3rd party benchmarks soon.


I'll probably pull the trigger on a Radeon VII myself, if only because of the better Linux drivers and the possibility of hackintosh usage. For my current system, I did a mid-cycle GPU upgrade (GTX 1080) and added NVMe a couple of years ago. Still running a 4790K with 32 GB of RAM, and it does great for most stuff, but not so much for encoding or dev work (a couple of DBs and services in the background).


Sadly they both appear to have a total board power 50W higher than NVIDIA's comparable model(s), so NVIDIA might still win out on power. But we'll have to wait for third-party benchmarks to confirm that.


The new Radeons are still worse in power/performance though.


I would wait the month or so for the Navi cards to show up and see how they do on thermals and whether the application performance is to your liking; Navi is intended for midrange cards (says the PR), but getting performance similar to your 1080 is possible.

AMD's recent releases have a reputation for running hot and power-hungry at stock and then doing much better when undervolted. Navi gets the die shrink, so the results for both power and thermals are likely to be even better, but benchmarking needs to happen before we have a full picture of what's changed.


Keep in mind that Navi (RX 5700 series) is out next month and is a new architecture.


It looks like the latest AMD cards are a bit more power hungry than their Nvidia counterparts. On performance, the Radeon VII seems closely aligned with the RTX 2080 (not Ti), the RX 5700 XT is around the RTX 2070, and the RX 5700 is above the RTX 2060. Depending on your workload, and whether it can leverage what AMD targets, it could be good to great. If you don't actually care about RTX features (and the slow framerates that come with them), you're better off with AMD for the price difference, even considering the extra power needs.


I guess you are not aware that at this point everything suggests the upcoming AMD 7nm processors are significantly more efficient than similarly performing Intel processors.

The Ryzen processor is 105 W vs. 165 W for the significantly slower Intel processor. AMD's TDP numbers are also much more accurate in terms of real peak usage than Intel's. So the Zen 2 processors will almost certainly have a much better performance/power ratio than the corresponding Intel ones going forward. That was definitely not the case for AMD in the last generation.


In that case, Intel should be in big trouble, because the advertised TDP seems to be less than half the power required to reach the chips' advertised performance:

> In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling.

https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9...

In other words

1) Intel's advertised "TDP" = true? (they no longer use the original meaning of "Thermal Design Power")

2) Intel's advertised peak performance = true (with caveats such as all the mitigations required for the CPU flaws, which lower performance)

3) Intel's advertised peak performance at advertised TDP = BIG FAT LIE


The Core i9-9980XE pulls from 199W->245W depending on the workload and AVX instructions being used under stock settings. The Ryzen is listed as a 105W part, although when overclocked, I'm sure it will pull more than that.

https://www.tomshardware.com/reviews/intel-core-i9-9980xe-cp...


That means the Ryzen part could pull ninety-four more watts, basically doubling its TDP, and still come in under what the i9-9980XE will draw.

There also seem to be some new X570 motherboards that will actually support this level of craziness.


The AMD chip is at 105 W (and AMD gives real TDP), while the Intel chip is at 185 W (and Intel gives TDP in non-turbo mode).

Intel is currently getting absolutely destroyed on that front.


> The AMD chip is at 105 W (and AMD gives real TDP), while the Intel chip is at 185 W (and Intel gives TDP in non-turbo mode).

Both AMD & Intel list TDP with all cores active at base clock frequencies. The major difference is that Intel heavily leverages what they call all-core boost to never actually run at their base clock, allowing them to list ridiculously low base frequencies. For example, the i9-9900K's base frequency is listed at 3.6 GHz, but the all-core turbo frequency is a whopping 4.7 GHz. That difference is how you end up with a CPU that expects 210 W of sustained power delivery (the 9900K's PL2 spec) even though its TDP is only 95 W.

AMD doesn't (didn't?) have an all-core boost concept, so their base clocks are just higher, making their TDP number closer to real-world. But they're still technically base-clock numbers and not boost numbers, so you will still see power draw in excess of TDP.
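
And if you want to force the per-spec behavior instead of fighting BIOS defaults, you can clamp PL1 to the rated TDP through the same powercap interface. A sketch, not a recommendation (assumes Linux with the intel_rapl driver; requires root):

    # Clamp the long-term package power limit (PL1) to the rated TDP.
    # Assumes Linux with the intel_rapl powercap driver; run as root.
    from pathlib import Path

    TDP_WATTS = 95  # e.g. the i9-9900K's advertised TDP
    pkg = Path("/sys/class/powercap/intel-rapl:0")
    (pkg / "constraint_0_power_limit_uw").write_text(str(TDP_WATTS * 10**6))
    (pkg / "enabled").write_text("1")  # ensure the limit is actually enforced
    print("PL1 is now", (pkg / "constraint_0_power_limit_uw").read_text().strip(), "uW")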


AWS and Azure have had EPYC instances available for a while, so the math (presumably) works out for cloud operators.


AWS also has aarch64 instances (we use them for CI and simulations at my day job). It’s starting to look like Intel’s days are numbered.


The difference in power is not enough to offset the price/performance differences. AMD is actually on par or better than Intel with Zen 2 anyway.



