
AMD "Renoir" Die Annotation Raises Hopes of Desktop Chips Featuring x16 PEG

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,297 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
VLSI engineer Fritzchens Fritz, famous for his high-detail photography of silicon dies and annotations of them, recently published his work on AMD's 7 nm "Renoir" APU silicon. His die-shots were annotated by Nemez, aka GPUsAreMagic. The floor-plan of the silicon shows that the CPU component finally dwarfs the iGPU component, thanks to double the CPU cores of the previous-generation "Picasso" silicon, spread across two CCXs (compute complexes). The CCX on "Renoir" is visibly smaller than the one on the "Zen 2" CCDs found in "Matisse" and "Rome" MCMs, as its L3 cache is smaller: 4 MB compared to 16 MB. Since those MCMs place the memory controller on a separate I/O die, it makes sense for their CCDs to carry more last-level cache per CCX to absorb the added memory latency.

We also see that the iGPU features no more than 8 "Vega" NGCUs, so there is no scope for "Renoir"-based desktop APUs to feature more than 512 stream processors. AMD attempted to compensate for the NGCU deficit by dialing up the iGPU's engine clocks by over 40% compared to those on "Picasso." What caught our eye in the annotation is the PCI-Express physical layer: the die indeed has 20 PCI-Express lanes, besides an additional 4 lanes that can be configured as two SATA 6 Gbps ports thanks to SerDes flexibility.
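For rough scale, Vega's 64 stream processors per CU give a quick peak-FP32 estimate of how the clock bump offsets the lost CUs. The clocks below are illustrative assumptions (Picasso's desktop Vega 11 ran around 1.4 GHz; a 40%-plus bump lands near 2 GHz), not measured figures:

```python
def peak_fp32_tflops(cus: int, clock_ghz: float, sp_per_cu: int = 64) -> float:
    """Peak FP32 throughput: SPs x 2 FLOPs/cycle (FMA) x clock."""
    return cus * sp_per_cu * 2 * clock_ghz / 1000

# Illustrative clocks: ~1.4 GHz for desktop Picasso, ~2.0 GHz for Renoir.
picasso = peak_fp32_tflops(11, 1.4)  # ~1.97 TFLOPS
renoir  = peak_fp32_tflops(8, 2.0)   # ~2.05 TFLOPS
print(f"Picasso 11 CU @ 1.4 GHz: {picasso:.2f} TFLOPS")
print(f"Renoir   8 CU @ 2.0 GHz: {renoir:.2f} TFLOPS")
```

On paper, 8 CUs at ~2 GHz slightly exceed 11 CUs at 1.4 GHz; sustained clocks under a mobile power budget are another matter, as the discussion below shows.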



This would mean that "Renoir" can finally spare 16 lanes toward PEG (PCI-Express graphics, the main x16 slot on your motherboard), besides 4 lanes toward the chipset bus, with the final four lanes allocated to the M.2 NVMe slot wired to the AM4 socket on a typical desktop platform. On mobile platforms, "Renoir" processors spare no more than 8 lanes toward PEG (discrete graphics), even when paired with discrete GPUs such as the mobile GeForce RTX 2060, which is capable of gen 3.0 x16. Previous-generation desktop APUs such as "Picasso" and "Raven Ridge" spare no more than 8 PCIe gen 3.0 lanes toward PEG, even on the desktop platform. x16 PEG capability would bolster the credentials of desktop "Renoir" processors for premium gaming PC builds using top SKUs such as the Ryzen 7 4700G.
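The desktop lane budget described above can be tallied. The grouping here is our reading of the annotation (20 general-purpose lanes plus the 4 SerDes-flexible ones), not an AMD-published breakdown:

```python
# Hypothetical desktop "Renoir" lane allocation, per the annotated die:
# 20 general-purpose PCIe lanes plus 4 SerDes-flexible lanes (PCIe or 2x SATA).
lane_budget = {
    "PEG (x16 slot)":        16,
    "chipset link (x4)":      4,
    "CPU-attached M.2 (x4)":  4,  # assumed to map to the SerDes-flexible group
}
total = sum(lane_budget.values())
print(f"total CPU lanes exposed: {total}")  # 24 = 20 + 4 flexible
```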

Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up, but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as in Raven Ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth. But now we have 3 fewer of them, and early Tiger Lake silicon is apparently already outperforming it on beta drivers.

As for having 20 PCIe lanes: what's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.

If it were me designing the APU I would have thrown out the die area wasted on 20 PCIe lanes and used it to increase the CU count to 12 or 15 instead.
 
Joined
Sep 13, 2018
Messages
29 (0.01/day)
ASRock, e.g. the B550M Steel Legend:

[attachment: cap1.PNG]


Gigabyte, e.g. the B550I AORUS PRO AX:

[attachment: cap2.PNG]
 

theblackitglows

New Member
Joined
Jun 19, 2020
Messages
1 (0.00/day)
@Chrispy_

Have you even taken a glimpse at the die? 8 PCIe lanes barely take the space of one CU.

Also, how do you know Renoir will be worse at gaming? Let's wait for benchmarks.
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
x16 PEG capability would bolster the credentials of desktop "Renoir" processors for premium gaming PC builds
premium gaming pc build gotta have a 10tflop gpu at this point
not an integrated part
 
Joined
Apr 8, 2008
Messages
342 (0.06/day)
System Name Xajel Main
Processor AMD Ryzen 7 5800X
Motherboard ASRock X570M Steel Legened
Cooling Corsair H100i PRO
Memory G.Skill DDR4 3600 32GB (2x16GB)
Video Card(s) ZOTAC GAMING GeForce RTX 3080 Ti AMP Holo
Storage (OS) Gigabyte AORUS NVMe Gen4 1TB + (Personal) WD Black SN850X 2TB + (Store) WD 8TB HDD
Display(s) LG 38WN95C Ultrawide 3840x1600 144Hz
Case Cooler Master CM690 III
Audio Device(s) Built-in Audio + Yamaha SR-C20 Soundbar
Power Supply Thermaltake 750W
Mouse Logitech MK710 Combo
Keyboard Logitech MK710 Combo (M705)
Software Windows 11 Pro
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as it was in Raven ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth, but now we have 3 fewer of them and Tiger lake early silicon is already outperforming it on beta drivers, apparently.

As for having 20PCIe lanes - What's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.

If it were me designing the APU I would have thrown out the die area wasted on 20 PCIe lanes and used it to increase the CU count to 12 or 15 instead.

That gave them more die area for the CPU cores, allowing them to fit two 4-core CCXs, a larger IMC supporting both DDR4 and LPDDR4X, plus extra PCIe lanes and other features.
Graphics CUs consume a lot of area; look at how much of the APU the GPU block takes up (which includes the CUs plus other GPU logic like the ROPs, the media and display engines, and the display PHYs). They can't reduce the size of a CU, so it's easier to use fewer CUs at higher clocks than more CUs at lower clocks.
 
Joined
Nov 6, 2016
Messages
1,773 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
premium gaming pc build gotta have a 10tflop gpu at this point
not an integrated part

Yeah, the article saying that, thanks to the x16 PCIe lanes on the APU, it can be teamed with discrete graphics... I thought that was obvious.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as it was in Raven ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth, but now we have 3 fewer of them and Tiger lake early silicon is already outperforming it on beta drivers, apparently.

As for having 20PCIe lanes - What's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.

If it were me designing the APU I would have thrown out the die area wasted on 20 PCIe lanes and used it to increase the CU count to 12 or 15 instead.


Ryzen 7 3700X is not faster than the Ryzen 7 4700G.

Look:


https://www.reddit.com/r/Amd/comments/c7ejdf
About 20% faster than the 2700X. Very nice.

About 5~6% behind ~5GHz 9900k with a ~13.6% frequency deficit.



The 3700x is about 12-19% faster than a 2700x.
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
So go read one of the Zephyrus G14 w/ Ryzen 9 4900HS reviews that's already out to gauge CPU gaming performance.
isn't the 4900 different from the 4700?
and how exactly do you suggest I compare it to the 3700X?
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
@Chrispy_
Have you even took a glimpse at the die? 8 Pcie lanes barely take the space of one CU.
I'd argue that it's pretty close to two CUs. Eyeballing the annotated die, I think a PCIe x4 rectangle is about 10-20% more area than a single CU, so removing 8 lanes would make room for a Vega 10 or Vega 11. That wasn't really my point though, because, if you can believe it, I'm not actually an AMD APU layout designer!

Also how do you know Renoir will be worse at gaming? Let's wait for benchmarks.
I think you've been caught sleeping; Renoir APU graphics benchmarks have been out for a while. The AnandTech review landed about a month ago, and people on these very forums have been buying Renoir laptops since before that.

Whilst it's true that Renoir's launch has been hampered by COVID, the earliest benchmarks actually go back as far as February, with more concrete stuff appearing in April and general availability of several Renoir laptop models by May. I'm slightly annoyed that the 4800U with LPDDR4X hasn't been reviewed yet, but I don't really think it matters at this point. Vega 8 isn't good enough to make a meaningful upgrade over Raven Ridge, and anyone wanting actual GPU performance is going to have to wait until the next generation or settle for a dGPU. The benchmarks of the 4700U with DDR4-3200 are within 20% of my own 2700U using CL14 DDR4-2400 and a 22W power limit (stock was 20W).

As for pre-release Tiger Lake vs current APUs? 30fps on beta/pre-release drivers vs 25fps for Renoir.

Like it or not, AMD APUs have always been slightly short of achieving AAA gaming at reasonable native-res 30fps. Renoir is such a sidegrade that AMD still haven't quite achieved that basic goal and now it's looking like Intel will beat them to it after AMD have been tantalising gamers with "almost enough" for three years. AMD had all the ingredients to bake a great APU and failed this generation. It's a CPU first and the GPU is clearly an afterthought with cut-down specs and less die area than before. I'm going to enjoy watching Intel kick their ass because as much as I want AMD to succeed, they've really fudged their APUs this generation and they need a kick up the ass.
 
Joined
May 30, 2015
Messages
1,942 (0.56/day)
Location
Seattle, WA
isn't 4900 different from 4700 ?
and how exactly do you suggest I compare it to 3700x ?
No, it's full Renoir on both. You're right in one way though: the current rumors surrounding the 4700G suggest it'll boost higher.

Put as much energy into finding a good comparison point as you do arguing on tech forums and I'm sure you'll find a way.
 
Joined
Jan 24, 2011
Messages
180 (0.04/day)
That gave them more die area for more CPU Core area, allowed them to add two 4C CCX. added more area for the IMC to support both DDR4 and LP-DDR4, plus other PCIe lanes and other features.
Graphics CU's consume too much area, seeing the APU without the GPU (which includes the CU (PS other GPU logics like the ROB), Media & Display Engines and the Display PHY's), they can't reduce the size of the CU, it's the easier part to use lower CU's with higher clocks than more CU's with lower clocks.
I am looking at the included die shot, and 3 CUs are not much bigger than 1 Zen core, so it certainly didn't allow them to add one more 4C CCX, the IMC, PCIe lanes and other stuff.

I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as it was in Raven ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth, but now we have 3 fewer of them and Tiger lake early silicon is already outperforming it on beta drivers, apparently.
....
The clocks it can sustain are still higher than the previous generation's.
What would happen if they had kept the 3 CUs, or added another for a total of 12? It would clock even lower than an 8-CU iGPU within a limited TDP, because 11-12 CUs consume more power than 8. The limit is still the power budget and bandwidth, so why bother keeping or adding more CUs if it won't significantly increase performance? If they resolve the bandwidth limit, then they can increase the CU count and set the TDP higher, with a cut-down version for 15W.
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.
Renoir with 8 CUs offers the best iGPU for now, and AMD also sells discrete graphics cards. So they offered as much performance as necessary to
- be faster than the competition
- not be slower than last gen APUs
- not threaten their discrete GPU sales


As an AMD fan of more than 20 years, I hope Intel teaches them a lesson.

As for having 20PCIe lanes - What's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.
You would totally cripple the expandability options of the motherboard with only 8 lanes.
 
Joined
Jul 3, 2018
Messages
847 (0.36/day)
Location
Haswell, USA
System Name Bruh
Processor 10700K 5.3Ghz 1.35v| i7 7920HQ 3.6Ghz -180Mv |
Motherboard Z490 TUF Wifi | Apple QMS180 |
Cooling EVGA 360MM | Laptop HS |
Memory DDR4 32GB 3600Mhz CL16 | LPDDR3 16GB 2133Mhz CL20 |
Video Card(s) Asus ROG Strix 3080 (2100Mhz/18Ghz)|Radeon Pro 560 (1150Mhz/1655Mhz)|
Storage Many SSDs, ~24TB HDD/8TB SSD
Display(s) S2719DGF, HP Z27i, Z24n| 1800P 15.4" + ZR30W + iPad Pro 10.5 2017
Case NR600 | MBP 2017 15" Silver | MSI GE62VR | Elite 120 Advanced
Audio Device(s) Lol imagine caring about audio
Power Supply 850GQ | Apple 87W USB-C |
Mouse Whatever I have on hand + trackpads (Lanchead TE)
Keyboard HyperX Origins Alloy idk
Software W10 20H2|W10 1903 LTSC/MacOS 11
Benchmark Scores No.
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as it was in Raven ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth, but now we have 3 fewer of them and Tiger lake early silicon is already outperforming it on beta drivers, apparently.

As for having 20PCIe lanes - What's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.

If it were me designing the APU I would have thrown out the die area wasted on 20 PCIe lanes and used it to increase the CU count to 12 or 15 instead.
The point is, assuming you can sustain the clock speed (which a lot of laptops can, when the TDP is unlocked), you can get significantly more performance.

As for the 20 PCIe lanes, it's something for the people who are interested in it; more is always better, I guess. There really isn't a reason to dunk on it, IMO; it's better than the x8 we were fed previously.

As for the 3700X vs the (potentially named) 4700G: the biggest deficit of the 3700X, as well as other Zen 2 CPUs, is the core-to-core latency, which wouldn't be present here. See the 3100 vs 3300X differences; assuming the move to a single CCX benefits the (potential) 4700G similarly, it should perform the same, if not better.
 
Joined
Apr 29, 2020
Messages
141 (0.08/day)
I still don't quite see the rationale behind dropping 3 CUs from the graphics core. Especially not since the single most distinctive point of an APU is its graphics.

Sure, they've ramped clocks up but most of the laptops reviewed so far aren't sustaining those clocks, so really it's still the same Vega CUs as it was in Raven ridge and the GPU clocks are still at the mercy of the power budget and RAM bandwidth, but now we have 3 fewer of them and Tiger lake early silicon is already outperforming it on beta drivers, apparently.

As for having 20PCIe lanes - What's the point? Anyone wanting to run a dGPU will just buy a 3700X instead and enjoy the superior performance of more cache and even more PCIe lanes.

If it were me designing the APU I would have thrown out the die area wasted on 20 PCIe lanes and used it to increase the CU count to 12 or 15 instead.

It could be that AMD considers (somewhat validly) that the graphics performance provided is enough for the target market; after all, Intel has sold garbage iGPUs for years without it being an impediment to market success. It could also be that the memory bandwidth available from DDR4 is not enough to support higher iGPU performance, whether from a wider but slower iGPU as in Picasso or a narrower but faster one as in Renoir. Remember, you have an 8-core high-performance CPU and an MX250-level GPU sharing less bandwidth than a GT 1030 has to itself. The alternative is AMD starting to embed dedicated memory, but that runs into the first issue: the market won't pay extra for performance it doesn't need. Gamers will continue to go with dGPUs, and businesses couldn't give a stuff. The hobbyist ITX crowd that might care about faster iGPUs is a minuscule market for a company as resource-constrained as AMD to target.
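A quick back-of-the-envelope on that bandwidth ceiling, assuming dual-channel DDR4-3200 (peak figures; sustained bandwidth is lower, and the CPU cores eat into the shared pool):

```python
def peak_bandwidth_gbs(mt_per_s: float, channels: int = 2, bus_bits: int = 64) -> float:
    """Peak DRAM bandwidth in GB/s: channels x bus width (bytes) x transfer rate."""
    return channels * (bus_bits / 8) * mt_per_s / 1000  # MT/s x bytes -> GB/s

# Dual-channel DDR4-3200: one pool shared by the CPU cores *and* the iGPU.
shared = peak_bandwidth_gbs(3200)               # 51.2 GB/s
# For scale, a GT 1030's dedicated 64-bit GDDR5 at 6000 MT/s effective:
gt1030 = peak_bandwidth_gbs(6000, channels=1)   # 48.0 GB/s
print(f"shared DDR4 pool: {shared:.1f} GB/s vs GT 1030 dedicated: {gt1030:.1f} GB/s")
```

The shared pool is only marginally larger than a GT 1030's dedicated memory, which is why adding CUs without adding bandwidth yields diminishing returns.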
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
It could also be that current memory bandwidth available from DDR4 is not enough to support higher iGPU performance, whether from a wider but slower igpu like in Picasso, or a narrower but faster igpu from Renoir.

The difference in performance between, for example, the 3200G and the 3400G, even with slower Zen+ cores, shows that 11 CUs can make a difference over 8 CUs even with the bandwidth that dual-channel DDR4 provides.

The alternative is AMD starting to embed dedicated memory but then that runs into the first issue, the market won't pay extra for performance it doesn't need.
Sideport memory was a feature one could find on $50 AM3 motherboards. So it's not really that expensive, and the technology has been available for over 15 years; they don't have to develop something new.
 
Joined
Apr 29, 2020
Messages
141 (0.08/day)
The difference in performance between for example, 3200G and 3400G, even with slower Zen+ cores, shows that 11 CUs can make a difference compared to 8 CUs even with that bandwidth that a dual channel DDR4 can provide.

Sideport memory was a feature that someone could find in $50 AM3 motherboards. So, it's not really that much expensive and definitely the technology is available for over 15 years. They don't have to develop something new.

That's irrelevant to my argument: 8-CU Renoir in mobile APUs performs better than desktop 11-CU Picasso because of the clock-speed difference. It is likely that desktop Renoir will perform even better thanks to the higher clocks its higher power budget allows, compared to the 4800HS in the video (which also only has 7 CUs, not 8).

My point is AMD may have determined that the APU performance provided by 8-CU Renoir is the peak that can be achieved with DDR4, which is why they cut it down to 8 CUs max, saving silicon for more CPU cores.

Regarding sideport memory, I remember that, as I had a mobo with it (256 MB, 32-bit from memory). But again, you miss my point: it is not a technical challenge that prevents them, it is an economic one. AMD is providing more iGPU power than people are willing to pay for, so it makes no sense to increase mobo cost for a faster iGPU when the people who care get a dGPU and the people who don't are happy with what is provided (which is double what Intel currently provides, considering Tiger Lake isn't released yet).
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
That's irrelevant to my argument, 8CU Renoir in mobile APU's performs better than desktop 11CU Picasso, because of the clockspeed difference.
. It is likely that desktop Renoir will perform higher due to higher clocks from having a higher power budget than the 4800HS in the video (which also only has 7CU's, not 8).

My point is AMD may have determined that the APU performance provided by 8 CU Renoir is the peak that can be achieved with DDR4 which is why they cut it down to 8CU max, saving silicon for more CPU cores.

Regarding Sideport memory, I remember that as I had a mobo with it (256mb, 32 bit from memory). But again, you miss my point, it is not a technical challenge that prevents them, it is an economic one. AMD is providing more iGPU power than people are willing to pay for, so it makes no sense to increase mobo cost to make a faster iGPU when people who care get a dGPU and people that don't are happy with what is provided (which is double what Intel are currently providing, considering tiger lake isn't releases yet).
It's not just clock speed. Renoir comes with Zen 2 cores that perform much better in applications like games. Not to mention we're also talking about real cores, not multithreading, and Renoir has lower latency between the cores.

And no, the argument wasn't irrelevant, you just didn't understand it. Even with slower Zen+ cores, the difference between a 3200G and a 3400G is there, even in games where no more than 4 cores/threads are needed. This means that 11 CUs in Renoir could offer more performance than 8 CUs. And at the 65W TDP that you mention, the iGPU in a desktop Renoir could really shine. But then, who needs an RX 550? Probably no one.

Sideport memory could be used on mini-ITX and micro-ATX motherboards destined for mini PCs without discrete graphics.

Look, you make a lot of assumptions to support your point of view, even about what people want. No problem with that. But assumptions are not facts, and no, what people need is not what you believe they need. And people sometimes know what they need but don't know how to get it. So you see cases where someone goes out and buys a GT 730 with 4GB of DDR3 on a 64-bit bus to play games, because the iGPU is slow.

Features like sideport memory are there as an option. Offering sideport memory doesn't mean every motherboard out there will rush to implement it, making them all immediately more expensive. You are wrong here, again.

P.S. If Intel manages to win the iGPU battle and the next AMD APUs come with 12 CUs, sideport memory and hybrid graphics, let's see what you say then.
 
Joined
Apr 29, 2020
Messages
141 (0.08/day)
It's not just clockspeed. Renoir comes with Zen 2 cores that perform much better in applications like games. Not to mention that we also talk about real cores, not multithreading and also Renoir comes with lower latency between the cores.

And no, the argument wasn't irrelevant. You just didn't understood it. Even with slower Zen+ cores the difference between a 3200G and a 3400G is there, even in games where not more than 4 cores/threads are needed. This means that 11 CUs in Renoir could offer more performance than 8 CUs. And because of that 65W TDP that you mention, the iGPU in a desktop Renoir could really shine. But then, who needs an RX 550? Probably no one.

Sideport memory could be used in mini ITX and micro ATX motherboards that where going to be used in mini PCs without discrete graphics.

Look, you make a lot of assumptions to support your point of view, even about what people want. No problem with that. But assumptions are not facts and no, what people need is not what you believe they need. And people sometime know what they need, but they don't know how to get it. So you see examples where someone is going out and buys a GT 730 with 4GB DDR3 memory on a 64bit data bus, to play games because the iGPU is slow.

Features like sideport memory are there as an option. Offering sideport memory doesn't mean that every motherboard out there will rush to implement it making it immediately more expensive. You are wrong here, again.

P.S. If Intel manages to win the iGPU battle and next AMD APUs come with 12 CUs, sideport memory and Hybrid graphics, then let's see what you will say then.

Are you seriously trying to claim that the Picasso iGPU is CPU-limited? Because that is the only way Renoir's equal performance against higher-CU Picasso could be attributed to its Zen 2 cores. And why are you comparing the 3200G and 3400G in this discussion? Your point has been that AMD shrinking the CU count in Renoir reflects a conscious decision to keep the status quo; my point has been that 1) 8-CU Renoir likely beats 11-CU Picasso (given that power-constrained 7-CU Renoir equals 11-CU Picasso), and 2) AMD's decision not to press for an 11-CU Renoir is likely because they are hitting up against the bandwidth limits of DDR4.

It is the same as strapping a 2080 with 64-bit GDDR3: it wouldn't perform any better than a 1650, because it would be so bandwidth-limited. You can already see this with Picasso in how much performance goes up with higher memory speeds; it's clearly starved of bandwidth, and Renoir would be even more starved when it also has an 8-core high-performance CPU to feed.

Tiger Lake may perform better; I'm waiting to see it actually benched rather than iffy leaks with little detail. If Intel does release a 12-CU-equivalent iGPU that fits within a similar power limit and outperforms Renoir, then I'll congratulate Intel, and still think AMD made a sound technical decision to set the CU count as they did with Renoir.
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
It could be that AMD consider (somewhat validly) that the graphics performance provided is enough for the target market (after all, Intel has sold garbage igpu for years without it being an impediment to market success).

This is both the most depressing and the most likely possibility.

"Intel got filthy rich by serving up hot garbage for decades, let's see if it works for us!"
- AMD
 