
GDDR5X Puts Up a Fight Against HBM, AMD and NVIDIA Mulling Implementations

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,233 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
There's still a little fight left in the GDDR5 ecosystem against the faster and more energy-efficient HBM standard, which has a vast, unexplored performance growth curve ahead of it. The new GDDR5X standard offers double the bandwidth per pin compared to current-generation GDDR5, without any major design or electrical changes, letting GPU makers make a seamless and cost-effective transition to it.

In a presentation by a DRAM maker leaked to the web, GDDR5X is touted as offering double the data per memory access, at 64 bytes/access, compared to the 32 bytes/access of today's fastest GDDR5, which is currently saturating its clock/voltage curve at 7 Gbps. GDDR5X gives the ageing DRAM standard a new lease of life, offering 10-12 Gbps initially, with a goal of 16 Gbps in the long term. GDDR5X chips will have pin layouts identical to their predecessors', so it should cost GPU makers barely any R&D to implement them.
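A minimal sketch of where the doubled per-access figure could come from, assuming GDDR5X doubles GDDR5's 8n prefetch to 16n over a standard 32-bit chip interface; the prefetch depths and array clock here are inferences from the 32 vs 64 bytes/access and 7 vs 14 Gbps figures, not something stated in the leak:

```python
# Minimal sketch: a deeper prefetch doubles the data per access and, at the
# same internal array clock, the per-pin data rate. The 8n/16n prefetch
# depths and 32-bit chip interface are assumptions inferred from the
# 32 vs 64 bytes/access figures above, not taken from the leaked slides.

def access_granularity_bytes(prefetch_n, chip_pins=32):
    """Bytes delivered per memory access by a single chip."""
    return prefetch_n * chip_pins // 8

def per_pin_rate_gbps(array_clock_mhz, prefetch_n):
    """Per-pin data rate: internal array clock x words prefetched per access."""
    return array_clock_mhz * prefetch_n / 1000

print(access_granularity_bytes(8))    # GDDR5:  32 bytes/access
print(access_granularity_bytes(16))   # GDDR5X: 64 bytes/access
print(per_pin_rate_gbps(875, 8))      # 7.0 Gbps: today's fastest GDDR5
print(per_pin_rate_gbps(875, 16))     # 14.0 Gbps: GDDR5X, same array clock
```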



When mass-produced by companies like Micron, GDDR5X is touted to be extremely cost-effective compared to upcoming HBM standards such as HBM2. According to a Golem.de report, both AMD and NVIDIA are mulling GPUs that support GDDR5X, so it's likely the two will reserve expensive HBM2 solutions for only their most premium GPUs and implement GDDR5X on their mainstream/performance solutions to keep costs competitive.

 
Joined
Oct 8, 2006
Messages
173 (0.03/day)
Don't see why they wouldn't use it. No wonder AMD will make a new series instead of just rebranding; this is an easy jump for their products.
 
Joined
Sep 1, 2015
Messages
152 (0.05/day)
Does anyone know how much HBM and GDDR5 cost? Because if it's something like $20-50 for 8 GB of HBM, then GDDR5X makes sense in the sub-$300 market but not above it. Hope to see hybrid cards that use both in the sub-$300 segment.
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
If it's otherwise the same as GDDR5 it's easy to implement, but it carries the same-size footprint, which is where HBM has the advantage of being very compact.
 
Joined
Apr 8, 2008
Messages
339 (0.06/day)
System Name Xajel Main
Processor AMD Ryzen 7 5800X
Motherboard ASRock X570M Steel Legend
Cooling Corsair H100i PRO
Memory G.Skill DDR4 3600 32GB (2x16GB)
Video Card(s) ZOTAC GAMING GeForce RTX 3080 Ti AMP Holo
Storage (OS) Gigabyte AORUS NVMe Gen4 1TB + (Personal) WD Black SN850X 2TB + (Store) WD 8TB HDD
Display(s) LG 38WN95C Ultrawide 3840x1600 144Hz
Case Cooler Master CM690 III
Audio Device(s) Built-in Audio + Yamaha SR-C20 Soundbar
Power Supply Thermaltake 750W
Mouse Logitech MK710 Combo
Keyboard Logitech MK710 Combo (M705)
Software Windows 11 Pro
Bandwidth is only half the story: HBM uses much less power than GDDR5, and we know nothing about GDDR5X's power usage either, which I think will be even higher.

In a high-end graphics card, a less power-hungry memory system means more power headroom for the GPU. AMD used this fact with the Fury X, so I think high-end graphics cards will benefit more from HBM, especially since HBM is still in its first generation. Second-generation HBM2, which is faster and allows more memory than the first generation, is coming with the next generation of AMD and NVIDIA GPUs, making even more room for the GPU.

Plus it's not just power: HBM doesn't run as hot as GDDR5 either, especially since GDDR5 requires its own power MOSFETs, which also consume power and generate heat.

So HBM (especially HBM2) is a winner over GDDR5 on every count except cost, so I think it will stay in the high-end market, and especially in dual-GPU cards, where even PCB space is a challenge of its own!
 
Joined
Oct 30, 2008
Messages
1,768 (0.30/day)
System Name Lailalo
Processor Ryzen 9 5900X Boosts to 4.95 GHz
Motherboard Asus TUF Gaming X570-Plus (WiFi)
Cooling Noctua
Memory 32GB DDR4 3200 Corsair Vengeance
Video Card(s) XFX 7900XT 20GB
Storage Samsung 970 Pro Plus 1TB, Crucial 1TB MX500 SSD, Seagate 3TB
Display(s) LG Ultrawide 29in @ 2560x1080
Case Coolermaster Storm Sniper
Power Supply XPG 1000W
Mouse G602
Keyboard G510s
Software Windows 10 Pro / Windows 10 Home
Makes sense they'd use GDDR5 or GDDR5X for the sub-$300 area. New tech always = higher intro price.
 
Joined
Apr 12, 2015
Messages
212 (0.06/day)
Location
ID_SUB
System Name Asus X450JB
Processor Intel Core i7-4720HQ
Motherboard Asus
Memory 2x 4GiB
Video Card(s) nVidia GT940M
Storage 2x 1TB
"Easier implementation" isn't gonna be the big deal here. Manufacturing-wise, once you use a specific technology in your product, it's easier to implement that technology across all of your similar product. Even more if the tech are significantly different, which is the case in HBM vs GDDR case.

Another factor like supply chain/avability or massively different cost will force manufacturer to use second choice, which made easier by the already familiar technology. That in turn made HBM the niche one, with corresponding cost overhead.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Seems pointless. There is really no more room for growth until they get off 28 nm. AMD is going to stick with HBM, so the only potential client is NVIDIA, and I doubt NVIDIA wants to re-release cards again before 14/16 nm.

On the other hand, I could see GDDR5X being used on cheaper 14/16 nm cards.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Bandwidth is only half the story: HBM uses much less power than GDDR5, and we know nothing about GDDR5X's power usage either, which I think will be even higher.
Bandwidth is less than half the story. GDDR5X likely costs the same to implement as standard 20 nm GDDR5, which is a commodity product. HBM, while relatively cheap to manufacture, costs more to implement because of the microbumping required.
On the other hand, I could see GDDR5X being used on cheaper 14/16 nm cards.
That would be the market: small GPUs where size/power isn't much of an issue and bandwidth isn't paramount. A GDDR5X-equipped board could still use a narrow memory bus, gain appreciably in bandwidth, and very likely remain GPU-constrained rather than bandwidth-constrained.
 
Joined
Nov 4, 2005
Messages
11,982 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400 MHz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
So, double the fetch length for high demand, with no mention of what it does to timings, which matter just as much, or of how much extra logic, termination, and other circuitry has to be built into expensive die space. HBM, by contrast, should allow termination on the substrate, which is cheap by comparison, may also allow substrate-based power-management logic, and permits shorter, less attenuating signal paths and thus scalable timing expectations.
 
Joined
Sep 1, 2015
Messages
152 (0.05/day)
Bandwidth is less than half the story. GDDR5X likely costs the same to implement as standard 20 nm GDDR5, which is a commodity product. HBM, while relatively cheap to manufacture, costs more to implement because of the microbumping required.

That would be the market: small GPUs where size/power isn't much of an issue and bandwidth isn't paramount. A GDDR5X-equipped board could still use a narrow memory bus, gain appreciably in bandwidth, and very likely remain GPU-constrained rather than bandwidth-constrained.
As for NVIDIA, they usually replace their whole product range each time they launch a new core design. AMD has announced that they will build their new generation around Arctic Islands.
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
...?

Where is the fight?


GDDR5X would functionally sit on lower-end hardware, taking the place of retread architectures. This is similar to how GDDR3 stuck around for quite some time after GDDR5 was introduced, no?

If that's the case, GDDR5X wouldn't compete with HBM but would be a lower-cost implementation for lower-end cards. HBM would be shuffled from the highest-end cards downward, leaving competition (if the HBM2 hype is to be believed) only in the brackets that HBM production could not fully serve with cheap chips.



As far as NVIDIA versus AMD, why even ask? Most assuredly, Pascal and Arctic Islands are shooting for lower temperatures and power draw. Moving to more power-hungry RAM opposes that goal, so the only place it'll be viable is where the GPUs are smaller, which is just the lower-end cards. Again, there is no competition.

I guess I'm agreeing with most other people here. GDDR5X is RDRAM: better than what we've got right now, but not better than pursuing something else entirely.
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
If we're only hearing of this just now, one would reasonably believe a company like Micron has had this for a while; face it, it didn't arrive as a lightning bolt a month ago. Let's say they had an inkling of it a year ago. At that time they would've white-papered it to NVIDIA/AMD engineering.

For assurance I'll ask: can this be implemented without changes to the chip's memory controller, meaning existing parts could've taken GDDR5X as pretty much plug-and-play?

Now, for Maxwell a year ago would've been too late for sure, and/or with Maxwell's memory compression and overall efficiency they could forgo the supposed benefits at the price/margins they intended. Though one would've thought AMD could've made use of this on Hawaii (390/390X), so why didn't they? Perhaps 8 GB was more of a marketing boon than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly than 8 GB of GDDR5, used more power, and on a 512-bit bus probably didn't offer the performance jump to justify it.

I think we might see it with 14/16 nm FinFET parts that hold to a 192/256-bit bus and get by just fine on 4 GB. I think they need to offset the work of the new process in markets where 4 GB is all that's required. In cases at resolutions that truly can use more than 4 GB, HBM2 has it beat.
 
Joined
Jun 13, 2012
Messages
1,388 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000 MHz
Video Card(s) Asus Dual GeForce RTX 4070 Super (2800 MHz @ 1.0 V, ~60 MHz overclock, -0.1 V)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Now, for Maxwell a year ago would've been too late for sure, and/or with Maxwell's memory compression and overall efficiency they could forgo the supposed benefits at the price/margins they intended. Though one would've thought AMD could've made use of this on Hawaii (390/390X), so why didn't they? Perhaps 8 GB was more of a marketing boon than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly than 8 GB of GDDR5, used more power, and on a 512-bit bus probably didn't offer the performance jump to justify it.
Not knowing everything about GDDR5X, I'd assume it would need a change on the controller side of the chip to manage it properly, and AMD likely didn't want to put money into that for a rebranded GPU.
 

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.04/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite.

This is an alternative to GDDR5 for the mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller than with normal GDDR5 because of the higher bandwidth.
 
Joined
Feb 14, 2012
Messages
2,355 (0.50/day)
System Name msdos
Processor 8086
Motherboard mainboard
Cooling passive
Memory 640KB + 384KB extended
Video Card(s) EGA
Storage 5.25"
Display(s) 80x25
Case plastic
Audio Device(s) modchip
Power Supply 45 watts
Mouse serial
Keyboard yes
Software disk commander
Benchmark Scores still running
GDDR5X would functionally sit on lower-end hardware, taking the place of retread architectures. This is similar to how GDDR3 stuck around for quite some time after GDDR5 was introduced, no?

Except that there isn't a card with HBM showing a clean-sweep advantage due to having used HBM. GDDR5 was a big step over GDDR3, no questions asked.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite.
This is an alternative to GDDR5 for the mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller than with normal GDDR5 because of the higher bandwidth.
QFT, but I doubt many actually even glance at the presentation, which also states that the I/O pin count stays the same and that only a "limited effort" is required to update an existing GDDR5 IMC.
Maybe people are too pressed for time to actually read the info and do some basic research. Much easier to skim-read (if that) and spend most of the available time writing preconceived nonsense.
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite.

This is an alternative to GDDR5 for the mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller than with normal GDDR5 because of the higher bandwidth.

I think you've lost track of reality somewhere in the mix. You can read an article and process the facts without spelling out every logical step taken along the way. Let's review:

1) GDDR5X is lower voltage (1.5 V down to 1.35 V, a 10% decrease). Fact.
2) GDDR5X targets higher capacities. Extrapolated fact, based on the prefetch focus of the slides.
3) Generally speaking, GPU RAM is going to be better utilized, so low-end cards are going to keep getting more RAM as a matter of course. Demonstrable reality.
4) Assume RAM quantities only double. You're looking at a 10% decrease in voltage with a 100% increase in chip count. That's a net power increase of around 80% (insanely ballparked off variable voltage and constant leakage; more likely a 50-80% increase in reality). Reasonable assumption of net influence.

Therefore, GDDR5X is a net efficiency improvement per memory cell, but as more cells are required, a net power increase overall. HBM has yet to show real improvements in performance, but you've placed the cart before the horse there. Fury managed to take less RAM and make it not only fit in a small thermal envelope but actually perform well, despite basically being strapped to a GPU otherwise not really meant for it (yay, interposer).
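A minimal sketch of that ballpark, under the loud assumptions that total memory power scales linearly with chip count and with either voltage or voltage squared (leakage ignored; only the two voltages come from the slides):

```python
# Ballpark of the estimate in point 4, assuming total memory power scales
# linearly with chip count and with either V or V^2 (leakage ignored).
# Only the two voltages come from the leaked slides; the rest is assumption.

V_GDDR5, V_GDDR5X = 1.50, 1.35   # volts, per the slides
chip_count_ratio = 2.0            # assume the RAM quantity doubles

linear    = (V_GDDR5X / V_GDDR5) * chip_count_ratio        # P ~ V * N
quadratic = (V_GDDR5X / V_GDDR5) ** 2 * chip_count_ratio   # P ~ V^2 * N

print(f"P ~ V:   {linear:.2f}x (+{(linear - 1) * 100:.0f}%)")       # 1.80x (+80%)
print(f"P ~ V^2: {quadratic:.2f}x (+{(quadratic - 1) * 100:.0f}%)")  # 1.62x (+62%)
```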

While I respect you pointing out the obvious, perhaps you should ask what underlying issues you might be missing before asking if everyone is a dullard. I can't speak for everyone, but if I explained 100% of my mental processes nobody would ever listen. To be fair, though, I can't say this is everyone's line of reasoning.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
While I respect you pointing out the obvious, perhaps you should ask what underlying issues you might be missing before asking if everyone is a dullard.
I took Kanan's choice of words to mean that less of the GPU die need be devoted to its IMCs and memory pinout. For example, Tahiti's uncore accounts for more than half the GPU die area, and the GDDR5 memory controllers and their I/O account for the greater portion of that.


Using the current Hawaii GPU* as a further example (since it is architecturally similar):

512-bit bus width / 8 * 6 Gbps effective memory speed = 384 GB/s of bandwidth.
Using GDDR5X on a future product with half the IMCs:
256-bit bus width / 8 * 10-16 Gbps effective memory speed = 320-512 GB/s of bandwidth, with half the number of IMCs and half the memory-pinout die space.
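As a quick sketch of that arithmetic (the figures are the ones in this post, not vendor-confirmed specs):

```python
# The arithmetic above in one helper: peak bandwidth in GB/s is the bus
# width in bytes times the per-pin data rate. Figures are from this post,
# not vendor-confirmed specs.

def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(512, 6))    # Hawaii-style GDDR5:           384.0 GB/s
print(bandwidth_gb_s(256, 10))   # half the IMCs, GDDR5X launch: 320.0 GB/s
print(bandwidth_gb_s(256, 16))   # half the IMCs, GDDR5X target: 512.0 GB/s
```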

So you've effectively saved die space and lowered the power envelope on parts whose prime considerations are production cost and sale price (which Kanan referenced). The only way this doesn't make sense is if HBM + interposer + microbump packaging is less expensive than traditional BGA assembly, something I have not seen mentioned, nor believe to be the case.

* I don't believe Hawaii-type performance is where the product is aimed; it's merely an example. A current 256-bit level of performance moving to a 128-bit bus width is a more likely scenario.
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
I took Kanan's choice of words to mean that less of the GPU die need be devoted to its IMCs and memory pinout. For example, Tahiti's uncore accounts for more than half the GPU die area, and the GDDR5 memory controllers and their I/O account for the greater portion of that.


Using the current Hawaii GPU* as a further example (since it is architecturally similar):

512-bit bus width / 8 * 6 Gbps effective memory speed = 384 GB/s of bandwidth.
Using GDDR5X on a future product with half the IMCs:
256-bit bus width / 8 * 10-16 Gbps effective memory speed = 320-512 GB/s of bandwidth, with half the number of IMCs and half the memory-pinout die space.

So you've effectively saved die space and lowered the power envelope on parts whose prime considerations are production cost and sale price (which Kanan referenced). The only way this doesn't make sense is if HBM + interposer + microbump packaging is less expensive than traditional BGA assembly, something I have not seen mentioned, nor believe to be the case.

* I don't believe Hawaii-type performance is where the product is aimed; it's merely an example. A current 256-bit level of performance moving to a 128-bit bus width is a more likely scenario.


I don't contest this point; other components may well be made cheaper. My issue is (bolded below):

Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite.

This is an alternative to GDDR5 for the mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller than with normal GDDR5 because of the higher bandwidth.

I don't argue that it might be more cost-effective to run GDDR5X (I believe I made that point myself). My point of contention is the suggestion that, because GDDR5X has a lower voltage, it will run cooler. While strictly correct, the reality is that the era when 0.5-2 GB of VRAM was enough, three generations of it, is disappearing. The decrease in GDDR5X voltage will thus still represent a net real-world increase in temperatures, because more memory has to be packed in.
 

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.04/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
Still, GDDR5X needs less power for the same amount of memory, and that's what I meant; I'm pretty sure the others said the opposite, so I disagreed. What you're doing is basically speculating that RAM is doubled again on mainstream cards, which I'm not quite convinced will happen. 8 GB of RAM is a lot, and I don't see cards weaker than a 390 running 8 GB of (GDDR5X) RAM. And even if they did, it has nothing to do with what I was talking about: doubling the amount and thereby increasing total power says nothing about the RAM type itself being more efficient than GDDR5, so it's a moot point.

HBM has yet to show real improvements in performance, but you've placed the cart before the horse there.

I didn't talk about HBM one bit. But you're still wrong: HBM has already proven itself on the Fiji cards. It's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increases the performance of every Fiji card further. That proves not that Fiji has too much bandwidth, but that it can't get enough. Besides, HBM made Fiji possible; there would be no Fiji without HBM. The same chip with GDDR5 would've drawn more than 300 W TDP, a no-go, or would have needed lower clock speeds, also a no-go, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
Still, GDDR5X needs less power for the same amount of memory, and that's what I meant; I'm pretty sure the others said the opposite, so I disagreed. What you're doing is basically speculating that RAM is doubled again on mainstream cards, which I'm not quite convinced will happen. 8 GB of RAM is a lot, and I don't see cards weaker than a 390 running 8 GB of (GDDR5X) RAM. And even if they did, it has nothing to do with what I was talking about: doubling the amount and thereby increasing total power says nothing about the RAM type itself being more efficient than GDDR5, so it's a moot point.



I didn't talk about HBM one bit. But you're still wrong: HBM has already proven itself on the Fiji cards. It's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increases the performance of every Fiji card further. That proves not that Fiji has too much bandwidth, but that it can't get enough. Besides, HBM made Fiji possible; there would be no Fiji without HBM. The same chip with GDDR5 would've drawn more than 300 W TDP, a no-go, or would have needed lower clock speeds, also a no-go, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.

Allow me a little exercise, and an apology. First, the apology: I conflated the quote from @xorbe with what you said. That was incorrect, and my apologies.

Next, let's review the quotes.
Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite...

A very true point, yet somewhat backwards. Let's review the slides you criticize others for not reading; in particular, slide 94a. The way the memory supposedly doubles bandwidth is a much larger prefetch. That's all fine and dandy, but if I'm reading it correctly, the goal is to have more RAM to feed the increased prefetch. That generally implies more RAM would produce better results, as more data can be stored, with prefetching accounting for the effectively increased bandwidth.

Likewise, everyone wants more RAM. Five years ago a 1 GB card was high-end; today an 8 GB card is high-end. Do you really expect that trend not to continue? Do you think somebody out there is going to decide that 2 GB (what we've got now on middle-ground cards) is enough? I'd say that was insane, but I'd prefer not to have it taken as an insult. For GDDR5X to show it performs better than GDDR5 you'd have to compare like with like, but that isn't what sells cards. You get somebody to upgrade by offering more and better, which is easily demonstrable when you can say you've doubled the RAM. In engineering land that doesn't mean squat; all that matters is that people will fork their money over for what is perceived to be better.


... But you're still wrong: HBM has already proven itself on the Fiji cards. It's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increases the performance of every Fiji card further. That proves not that Fiji has too much bandwidth, but that it can't get enough. Besides, HBM made Fiji possible; there would be no Fiji without HBM. The same chip with GDDR5 would've drawn more than 300 W TDP, a no-go, or would have needed lower clock speeds, also a no-go, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.

To the former point: prove it. The statement that overclocking increases performance is... I'm going to call it a statement so obvious as to be useless. The reality is that any data I can find on Fiji suggests that overclocking the core vastly outweighs the benefits of clocking the HBM alone (this thread might be of use: http://www.techpowerup.com/forums/threads/confirmed-overclocking-hbm-performance-test.214022/). If you can demonstrate that HBM overclocking scales performance substantially (let's say 5:4, instead of a better 1:1 ratio), I'll eat my words. What I've seen is sub-2:1, which in my book says the HBM bandwidth is sufficient for these cards, despite being the more limited HBM1.

To the latter point: that doesn't address the issue. AMD wanting overall performance similar to the Titan's didn't require HBM, as demonstrated by the Titan itself. What it required was investment in hardware design rather than re-releasing the same architecture with minor improvements, a higher clock, and more thermal output. I love AMD, but they're seriously behind the ball here. The 7xxx, 2xx, and 3xx cards follow that path to a T. While the 7xxx series was great, NVIDIA invested money back into R&D to produce a genuinely cooler, less power-hungry, and better-overclocking 9xx series. NVIDIA proved that Titan-level performance could easily be had with GDDR5. Your argument is just silly.


As to the final point, we agree that this is a low-end card thing. Being perfect for retread architectures, cheap to design for, and more efficient in a direct comparison, it's a dream for the midrange 1080p crowd. It'll be cheap enough to give people 4+ GB of RAM (double the 2 GB people are currently dealing with on midrange cards), but that's all it does better.

Consider me unimpressed with GDDR5X, but hopeful that it continues to exist. AMD has basically developed HBM from the ground up, so they aren't going to use GDDR5X on anything but cheap cards. NVIDIA, hopefully, will make it a staple of their low-end cards, so HBM always has enough competition to keep it honest. Do I believe we'll see GDDR5X in any real capacity? No. Slide 94a says GDDR5X will be coming out around the same time as HBM2. That means it'll be fighting for a seat at the Pascal table when almost everything is supposedly already set for HBM2. That's a heck of a fight, even for a much cheaper product.
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
Can you all not read? It's written right on the presentation that it has reduced voltage, so power usage still goes down, not up: the exact opposite.
Man, a guy writes three words and the venom erupts!
than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly than 8 GB of GDDR5, used more power
My thinking was more about the overall gain in perf/watt... Regrettably, those three words weren't chosen all that carefully, but then again I'm not here to make this my "life's work" or to be the "keeper" of all things correctness.

The slide said:
Lower VDD, VDDQ for reduced power
• VDD, VDDQ = 1.35V for reduced energy/bit
• GDDR5 cannot abandon VDD, VDDQ = 1.5V due to legacy

Now, while that implies better energy efficiency, it can't tell us whether the amount saved offers a reasonable gain in real-world performance.
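A rough sense of what the voltage drop alone could buy, assuming switching energy scales with VDD squared (a standard CMOS rule of thumb; the slides quote only the voltages, not an energy figure):

```python
# Rough sense of the slide's "reduced energy/bit" claim, assuming switching
# energy scales with VDD squared (a CMOS rule of thumb; the slides state
# only the 1.5 V -> 1.35 V change, not an actual energy figure).

saving = 1 - (1.35 / 1.50) ** 2
print(f"~{saving:.0%} less energy per bit from the voltage drop alone")  # ~19%
```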

As for the memory controller, it said:
• The GDDR5 command protocol remains preserved as much as possible
• GDDR5 ecosystem is untouched

Though the next slide states:
Targeting a limited effort to upgrade a GDDR5 memory controller.

That sounds like existing memory controllers would need some tweaking, and correct, no one would see the value in re-working a chip's controller when all they need is for the chip to hold out 6-8 months.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
That sounds like existing memory controllers would need some tweaking, and correct, no one would see the value in re-working a chip's controller when all they need is for the chip to hold out 6-8 months.
You are making a couple of assumptions.
Firstly, who's to say a chip has a lifetime of 6-8 months? Most GPUs in production have much longer lifetimes, and with process-node cycles likely to get longer, I doubt GPUs will suddenly have their lifetimes shortened.
Secondly, memory controllers and the GDDR5 PHY (the physical layer between the IMCs and the memory chips) are routinely revised, and indeed reused across generations of architectures. One of the more recent high-profile, high-sales cases was the revision initiated on Kepler (the GTX 680's GK104-400 to the GTX 770's GK104-425), which is presently used in Maxwell. AMD undoubtedly reuses its memory controllers and PHY as well. It is quite conceivable (and very likely) that future architectures such as Arctic Islands and Pascal will be mated with GDDR5/GDDR5X logic blocks in addition to HBM, since the increased costs associated with the latter (interposer, micro-bump packaging, assembly, and verification) would make a $100-175 card much less viable even if the GPU were combined with a single stack of HBM memory.
 

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.04/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
Allow me a little exercise, and an apology. First, the apology: I conflated the quote from @xorbe with what you said. That was incorrect, and my apologies.
Np.
Do you really expect that trend not to continue? Do you think somebody out there is going to decide that 2 GB (what we've got now on middle-ground cards) is enough? I'd say that was insane, but I'd prefer not to have it taken as an insult.
No, and sorry for not putting it clearly: my answer was only about the next generation of cards coming in 2016. I think 4 GB for the lower cards and 8 GB for the middle cards will suffice, so it's hardly an increase. The top-end cards can have anything between 8 and 16 GB; that's not entirely clear at the moment, because 8 GB is a lot and will still be a lot in 2016, I'm pretty sure of that. Other than that, I expect the trend (more need for VRAM) to continue, just not in 2016; rather in 2017.
1 GB cards were good for 3-5 years, or even until now depending on the game, but I'd say at least 3 years. I used an HD 5970 for a long time and it has only 1 GB of VRAM, so don't overestimate how much VRAM is needed. That said, 4 or 8 GB will be perfectly fine in 2016, as long as you don't play at 4K with the highest details on a 4 GB card, and even that will depend on the game. But 6 GB (with compression, like the 980 Ti) or 8 GB (like the R9 390X) will suffice.
For GDDR5X to show it performs better than GDDR5 you'd have to compare like with like, but that isn't what sells cards. You get somebody to upgrade by offering more and better, which is easily demonstrable when you can say you've doubled the RAM. In engineering land that doesn't mean squat; all that matters is that people will fork their money over for what is perceived to be better.
Sadly, this is true. But I don't care much about that; we're here to talk about what's true and what's really needed, not what the average user out there thinks is better. They think a lot, and 50-90% of it is wrong...
To the former point: prove it. The statement that overclocking increases performance is... I'm going to call it a statement so obvious as to be useless. The reality is that any data I can find on Fiji suggests that overclocking the core vastly outweighs the benefits of clocking the HBM alone (this thread might be of use: http://www.techpowerup.com/forums/threads/confirmed-overclocking-hbm-performance-test.214022/). If you can demonstrate that HBM overclocking scales performance substantially (let's say 5:4, instead of a better 1:1 ratio), I'll eat my words. What I've seen is sub-2:1, which in my book says the HBM bandwidth is sufficient for these cards, despite being the more limited HBM1.
I never said HBM overclocking brings a lot of performance, just that it DOES bring some, and that's the important thing, because a card that already had more bandwidth than it needed would never gain any FPS from overclocking its VRAM. So HBM is good; it helps. Don't forget AMD's compression is a lot worse than what NVIDIA has on Maxwell, so AMD NEEDS lots of bandwidth on its cards. This is why Hawaii got a 512-bit bus, at first with RAM clocked at only 1250 MHz and now at 1500 MHz: because it never has enough. And this is why Fiji needs even more, because it's a more powerful chip, so it got HBM. The alternative would have been 1750 MHz GDDR5 on a 512-bit bus like Hawaii's, but that would've drawn too much power and still been inferior in maximum bandwidth.
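A minimal sketch of that comparison, taking GDDR5's effective per-pin rate as 4x the memory clock and the Fury X's public HBM1 figures; these conventions are my assumptions, not something stated in the thread:

```python
# Minimal sketch of the comparison above. GDDR5's effective per-pin rate
# is taken as 4x the memory clock; the HBM1 line uses Fury X's public
# 4096-bit / 1 Gbps figures. These conventions are my assumptions.

def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(512, 4 * 1.250))   # Hawaii @ 1250 MHz:     320.0 GB/s
print(bandwidth_gb_s(512, 4 * 1.500))   # Hawaii @ 1500 MHz:     384.0 GB/s
print(bandwidth_gb_s(512, 4 * 1.750))   # hypothetical 1750 MHz: 448.0 GB/s
print(bandwidth_gb_s(4096, 1.0))        # Fiji, HBM1:            512.0 GB/s
```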
To the latter point: that doesn't address the issue. AMD wanting overall performance similar to the Titan's didn't require HBM, as demonstrated by the Titan itself. What it required was investment in hardware design rather than re-releasing the same architecture with minor improvements, a higher clock, and more thermal output. I love AMD, but they're seriously behind the ball here.
This is not about what AMD should've had; it's about what AMD has (only the GCN 1.2 architecture) and what they could do with it. HBM made a big, big GCN chip possible; without it, never. And that's what I meant. From my perspective it does address the issue, even if not directly. It helped an older architecture go big and gave AMD a strong performer they otherwise would've missed, because Fiji with GDDR5 would have been impossible: too high a TDP. They couldn't have done that; it would've been awkward.
And by the way, I think they lacked the money to develop a new architecture the way NVIDIA did.

The 7xxx, 2xx, and 3xx cards follow that path to a T. While the 7xxx series was great, NVIDIA invested money back into R&D to produce a genuinely cooler, less power-hungry, and better-overclocking 9xx series. NVIDIA proved that Titan-level performance could easily be had with GDDR5. Your argument is just silly.
And therefore my argument is the opposite of silly. First try to understand before you run off and call anyone "silly". You are very quick to judge or harass people, and not for the first time; you do it way too frequently, and shouldn't do it at all.
As to the final point, we agree that this is a low-end card thing. Being perfect for retread architectures, cheap to design for, and more efficient in a direct comparison, it's a dream for the midrange 1080p crowd. It'll be cheap enough to give people 4+ GB of RAM (double the 2 GB people are currently dealing with on midrange cards), but that's all it does better.
I only see HBM1/2 on high-end cards now or in 2016, because it's too expensive for lower cards; therefore GDDR5X is good for midrange to semi-high-end cards, I'd say. I'd even bet on that. Do you really expect a premier technology to be used on midrange to semi-high-end cards? Not me. GDDR5X has a nice gap to fill there, I think.
Do I believe we'll see GDDR5X in any real capacity? No. Slide 94a says GDDR5X will be coming out around the same time as HBM2. That means it'll be fighting for a seat at the Pascal table when almost everything is supposedly already set for HBM2. That's a heck of a fight, even for a much cheaper product.
Well, that's no problem. If it arrives alongside HBM2, they can plan for it and produce cards with it (GDDR5X); I don't see why this would be a problem. And as I said, I only see HBM2 on high-end or highest-end ($650) cards. I think the $400 (if any)/$500/$650 cards will have it, so everything cheaper than that will be GDDR5X, and that's not only "cheap cards". I don't see $200-400 as "cheap". Really cheap is $150 or less, and that's GDDR5 land (not even GDDR5X), I think.
 