
Ryzen benchmarking and overclocking results

So what? You think all game creators will jump to optimizing for 16 threads just because that's what AMD offers for high-end gaming desktops? Get real. :)
Game studios have to earn money by selling copies, not by racing for the best FPS possible on high-end desktops.
Most people will still game on 4 cores - either from Intel or AMD (Ryzen 3/5 will be mainstream, Ryzen 7 is for enthusiasts).
And keep in mind we're still talking about desktops, while most PCs are laptops (also among frequent gamers).

Intel manages to squeeze a 65W i7 down to a laptop-sensible 45W version (e.g. 6700 -> 6700HQ).

Ryzen 7 has a nominal TDP of 95W, but it's been shown to draw 120W under maximum load.
This means that - to keep 8 cores - AMD would have to cut power draw roughly in half (rough numbers below), and that would simply obliterate single-thread performance (which is already not mind-blowing).
So yes, AMD managed to make an excellent flagship 8C/16T CPU for the Ryzen line.
But in the long run, when the market becomes saturated with Ryzen CPUs, it's highly probable that the vast majority of them will be 4-core units.
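For reference, the "cut roughly in half" arithmetic above works out like this (a quick sketch using the figures quoted in this post; the 45W target is just an assumed laptop H-class power envelope):

# Back-of-the-envelope check, using the figures quoted above.
desktop_tdp = 95      # W, nominal Ryzen 7 TDP
observed_peak = 120   # W, reported maximum load draw
mobile_target = 45    # W, assumed laptop H-class power envelope

print(f"cut vs nominal TDP:   {1 - mobile_target / desktop_tdp:.1%}")    # ~52.6%
print(f"cut vs observed peak: {1 - mobile_target / observed_peak:.1%}")  # ~62.5%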

Above all, a CPU design has to be very flexible to work well in many different tasks, in many different types of PCs.
However, a GPU doesn't have to be flexible. It can concentrate on multi-thread performance above anything else.
Why change this status quo? Why force game studios to spend huge money on optimizing games for the minority of >4C owners, when it really doesn't affect image quality or FPS?

Sure, if you're very brave, you can blame all the game studios for not utilizing more cores than 99% of their clients have.
But we have (at least) equally good reasons to blame AMD for not making a CPU that matches the current state of the gaming market. :)

You do realize Intel measures TDP at base clock, right? Also, that laptop version of the 6700 has very little to do with the desktop version. It has been this way for years. Also, a 16-thread laptop CPU? Haven't seen one of those yet, have we?
 
My plan is:

My main rig will go dual-socket: E5-2683 v3 x 2.
My media rig will switch from an i7-6700 to a Ryzen 1700.
 
Crucial's website already lists some 3466 2x8GB kits as compatible with the Crosshair VI and the Taichi/Fatal1ty Professional Gaming

That doesn't mean the XMP will work or that they will run at rated speeds... I can tell you from using the Crosshair VI that no XMP profile works
 
You do realize Intel measures TDP at base clock, right? Also, that laptop version of the 6700 has very little to do with the desktop version. It has been this way for years. Also, a 16-thread laptop CPU? Haven't seen one of those yet, have we?

Well... THW tested the flagship chips:
http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-11.html
The 6700K (TDP 91W) needed 101W - that's 10W over, or +11%. THW called this "not really acceptable", with which I have to agree.
Now a test of Ryzen (same page):
http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-11.html
A stressed 1800X (TDP 95W) needed 112W - 17W over, or +18%. And this is at base clocks (what about XFR?).
I've seen tests (e.g. WCCFTECH) suggesting it can reach 120W.
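As a quick sanity check of those percentages (nothing more than the arithmetic on the numbers quoted above):

# Recompute the TDP overshoot figures quoted from the THW reviews.
def overshoot(measured_w, tdp_w):
    # Returns watts over TDP and the overshoot as a percentage of TDP.
    return measured_w - tdp_w, round((measured_w - tdp_w) / tdp_w * 100, 1)

print(overshoot(101, 91))   # 6700K: (10, 11.0) -> 10W over, ~11%
print(overshoot(112, 95))   # 1800X: (17, 17.9) -> 17W over, ~18%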

We're used to the idea that Intel measures a "typical" consumption under load, while AMD gives you a physical limit. That's no longer valid.
AMD wanted Ryzen to have a TDP significantly below Intel's HEDT parts, and they did just that. End of story.

As for the design differences between the 6700 and 6700HQ - I won't argue with that, but look at the comparison below.
Intel gave us (at the same time) two CPUs:

                     6700     6700HQ
TDP [W]              65       45       (-30.8%)
Base Freq [GHz]      3.4      2.6      (-23.5%)
3DMark Fire Strike   10200    7190     (-29.5%)

No magic here - it's the same technology, and performance scales with power usage.
The question remains: will AMD show an 8-core mobile Ryzen with similarly nice performance scaling?
Or will the mobile lineup end at Ryzen 5?
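For what it's worth, a small sketch recomputing the deltas from the table above (the numbers are the ones quoted there, nothing measured by me):

# Compare the 6700HQ against the 6700 using the figures from the table above.
specs = {                      # (6700, 6700HQ)
    "TDP [W]":            (65, 45),
    "Base freq [GHz]":    (3.4, 2.6),
    "3DMark Fire Strike": (10200, 7190),
}
for name, (desktop, mobile) in specs.items():
    print(f"{name}: {mobile / desktop - 1:+.1%}")
# TDP [W]: -30.8%   Base freq [GHz]: -23.5%   3DMark Fire Strike: -29.5%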
 
You do realize Intel measures TDP at base clock, right? Also, that laptop version of the 6700 has very little to do with the desktop version. It has been this way for years. Also, a 16-thread laptop CPU? Haven't seen one of those yet, have we?
Pretty sure they include boost too. If not, I'd wonder whether AMD works the same way, considering how quickly Bulldozer/Vishera/Ryzen seem to ramp up.
 
Intel's own Ark page states that TDP measurement is done at base clock.
 
Link please.. :)

And AMD's? They use more power at stock and seem to scale worse when overclocking...

Edit: after thinking about it a bit... it seems counterintuitive considering how boost works in the first place (if it's under the power, temperature, workload, etc. limits, it boosts). If the TDP is measured without boost... how would it boost in the first place?

http://www.intel.com/content/www/us...ology/turbo-boost/turbo-boost-technology.html

He is correct, however.

Which makes sense since intel recommends this for cooling

[attached images: Intel Skylake processor cooling slides]
 
Wait, Intel has that liquid cooling solution?

I am so out of the loop after years of not caring about PC hardware. Is it actually usable, or just slightly better than their sorry excuse of a stock cooler?
 
Wait, Intel has that liquid cooling solution?

I am so out of the loop after years of not caring about PC hardware. Is it actually usable, or just slightly better than their sorry excuse of a stock cooler?

It is an H70/H80 with a different fan as far as I know
 
Thanks cdawall..

One has to wonder how boost works for all cores (as on some processors all cores go up a couple of bins)... or in general whether the TDP is at base clock. :)

And I still ask... how does AMD do it? He mentioned it like there was a worthwhile difference between the two, but gave no further details...
 
Thanks cdawall..

One has to wonder how boost works for all cores (as on some processors all cores go up a couple of bins)... or in general whether the TDP is at base clock. :)

I honestly don't know. My Xeon is tagged as a 120W chip (this is an ES), but it is labeled as 1.6 GHz; all 16 cores turbo to 2.0 GHz, 8 turbo to 2.4/2.5 IIRC, and one core will do 3.0. I can't really test CPU-only power consumption, but it doesn't draw that much under any workload I have seen.
 
Ryzen has that inherent extreme power optimization because AMD was forced to use 28/32nm nodes for so long, thoroughly optimizing Excavator for low power on 28nm and carrying that over to Zen as well. Combined with 14nm LPP, it's a pure win now. Intel's 14nm node is still better, but their architecture isn't in that regard.
 
One has to wonder how boost works for all cores (as on some processors all cores go up a couple of bins)... or in general whether the TDP is at base clock. :)
Should be obvious. When power draw is below the defined TDP, boost is allowed. As you know, we can adjust these TDP limits in BIOS and in Intel's XTU tool, and we also use Turbo to OC.

AMD and Ryzen, as far as I understand, work the same way, but the max XFR speed is hard-coded into each CPU individually, meaning that different CPUs of the same SKU can actually have different max Turbo speeds. I need to spend more time playing in the BIOS to figure out exactly how AMD is doing it, to be honest. I assume it is adjustable as well, given that I am pretty sure AMD said they'd be offering per-core clock adjustments in the future (not sure how exactly).

It'll be interesting to see how that affects how we approach overclocking this platform.
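To illustrate what I mean (a purely conceptual sketch, not Intel's or AMD's actual firmware logic; the names and limits are made up):

# Conceptual TDP-governed boost decision; all names/limits are illustrative.
def pick_multiplier(power_w, temp_c, tdp_limit_w=95, temp_limit_c=75,
                    base_mult=36, boost_mult=40):
    # Boost while under the power and thermal limits; otherwise hold base clock.
    if power_w < tdp_limit_w and temp_c < temp_limit_c:
        return boost_mult
    return base_mult

print(pick_multiplier(power_w=70, temp_c=60))    # 40 -> boosting
print(pick_multiplier(power_w=100, temp_c=60))   # 36 -> back to base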
 
So I ended up driving an hour to get a motherboard locally last night, and got my hands on the Aorus X370. Hands down the crappiest board I've ever owned (although the LED effects were cool)... the system ran great at 39.25x100 @ 1.45V (Ryzen 1700) with LLC at high... until 1.5 of the board's RAM slots went bad and the system would not POST, even after a CMOS reset and a flip to the secondary BIOS.

I can still boot in single channel on the other two slots, but man... what the hell. Turns out you can kill an actual slot on these by bumping the DRAM termination voltage (the last thing I did trying to get it stable at 12-13-13-13-36 @ 2400). Do not recommend. Motherboard swaps are a pain in the butt.

How does .5 of a RAM slot go bad, you ask? Well, on the one that's totally dead the system just sits there with a 0d code... on the other it acts like it's going to boot (but takes about 3 times longer) and then BSODs on the way into Windows.

No problems whatsoever on the remaining slots, which are unfortunately slots 3 and 4 and run single channel.

Somehow I'm not surprised; Gigabyte boards always seem to give me headaches, but everything else was out of stock.
 
Well, what board do we have in this thread? What memory did cdawall use? Each board's BIOS, at this point, will be tuned for specific modules, so what works on one board most likely won't work on another. QVL suggestions are good, but at the same time, I'd talk to people who have the exact same hardware you plan to buy.


I am currently using the ASRock Taichi and two modules from this kit: https://www.techpowerup.com/reviews/GSkill/F4-3200C14Q-32GTZSW/. I was able to simply enable XMP and have 3200 MHz work fully, but XFR is disabled (which is no problem, since CPU OC and XFR should not be used together IMHO).

What are your impressions of this board? It's definitely the board I'll want to purchase, but it's not currently available in my country :(
 
Dave, what do your 1700X thermals look like? I'm now thinking maybe it's my loop?

The 1700 is ice cold tho.
 
Should be obvious. When power draw is below the defined TDP, boost is allowed. As you know, we can adjust these TDP limits in BIOS and in Intel's XTU tool, and we also use Turbo to OC.

AMD and Ryzen, as far as I understand, work the same way, but the max XFR speed is hard-coded into each CPU individually, meaning that different CPUs of the same SKU can actually have different max Turbo speeds. I need to spend more time playing in the BIOS to figure out exactly how AMD is doing it, to be honest. I assume it is adjustable as well, given that I am pretty sure AMD said they'd be offering per-core clock adjustments in the future (not sure how exactly).

It'll be interesting to see how that affects how we approach overclocking this platform.
Right... so they test at a base clock and get 95W. Then you can boost a couple of bins and still be under? How does that work when we know clock increases raise power use? I can see why it works with fewer cores, but not with all.
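The way I picture it (first-order approximation only, with made-up numbers, not measurements): dynamic power scales roughly with active cores x frequency x voltage^2, so a couple of cores can boost inside the same budget where an all-core boost can't:

# P ~ k * cores * f * V^2 -- first-order estimate, illustrative numbers only.
def est_power(cores, freq_ghz, volts, k=2.8):
    return k * cores * freq_ghz * volts ** 2

print(est_power(8, 3.6, 1.20))   # all cores at base:   ~116 (arbitrary units)
print(est_power(8, 4.0, 1.35))   # all cores boosting:  ~163, over budget
print(est_power(2, 4.0, 1.35))   # two cores boosting:  ~41, easily under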

Dave, what do your 1700X thermals look like? I'm now thinking maybe it's my loop?

The 1700 is ice cold tho.
what is "chilly"? Where are you reading these "chilly" temps?
 
So I ended up driving an hour to get a motherboard locally last night, and got my hands on the Aorus X370. Hands down the crappiest board I've ever owned (although the LED effects were cool)... the system ran great at 39.25x100 @ 1.45V (Ryzen 1700) with LLC at high... until 1.5 of the board's RAM slots went bad and the system would not POST, even after a CMOS reset and a flip to the secondary BIOS.

I can still boot in single channel on the other two slots, but man... what the hell. Turns out you can kill an actual slot on these by bumping the DRAM termination voltage (the last thing I did trying to get it stable at 12-13-13-13-36 @ 2400). Do not recommend. Motherboard swaps are a pain in the butt.

How does .5 of a RAM slot go bad, you ask? Well, on the one that's totally dead the system just sits there with a 0d code... on the other it acts like it's going to boot (but takes about 3 times longer) and then BSODs on the way into Windows.

No problems whatsoever on the remaining slots, which are unfortunately slots 3 and 4 and run single channel.

Somehow I'm not surprised; Gigabyte boards always seem to give me headaches, but everything else was out of stock.
That's super interesting and well noted! I grabbed that board 'cause I like Gigabyte and Asus gives me problems. I guess I'll just wait till they update their BIOS!
 
Right... so they test at a base clock and get 95W. Then you can boost a couple of bins and still be under? How does that work when we know clock increases raise power use? I can see why it works with fewer cores, but not with all.

what is "chilly"? Where are you reading these "chilly" temps?

Ryzen Master... one CPU (Ryzen 1700) idles at 27C and sits at 64-66C under full load @ 3.925 GHz / 1.45V; the other (1800X) idles at 54C at STOCK... and loads in the 80s, also stock, after multiple reseatings/repastings. Needless to say, it's going back to Fry's tomorrow for a refund.


That's super interesting and well noted! I grabbed that board 'cause I like Gigabyte and Asus gives me problems. I guess I'll just wait till they update their BIOS!
Just don't mess with the ram too much yet and you should be ok.
 
Hype? Not met. Intel killer? At multi-threaded workloads, yes. Too early to know? Not at all.

AMD had a botched launch, sending people hardware that wasn't truly compatible. Ryzen is a good platform, but in some workloads (i.e. gaming), Intel wins when comparing stock CPU speeds. OC'd, Intel still gets more raw frequency, so it wins.

So, Intel is not in trouble from Ryzen at all. But should AMD be able to push clock speeds up... Intel could be beaten in all workloads. AMD just needs these chips to clock to 5 GHz on "traditional" cooling methods like Kaby Lake does. Seeing how Kaby Lake has 65W and 95W "OC" models, just like Ryzen, this might be possible in the future, but not right now.

That's about what I expected.
 