
Delete this, because less than half the people around here understand it or even want to.

Will AMD outperform NV while using DX12?


  • Total voters
    67
Status: Not open for further replies.
Joined
Jan 2, 2015
Messages
1,099 (0.31/day)
Processor FX-6350 @ 4.2GHz / i5-4670K @ 4GHz
Video Card(s) HD 7850 / R9 290
https://www.youtube.com/watch?t=44&v=v3dUhep0rBs
[Attached screenshots: cb.PNG, asd.PNG, 390980.PNG]
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,991 (2.54/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV, 1950MHz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
Why do you make these threads and post AMD slideshow shots?
 
Joined
Jan 2, 2015
Messages
1,099 (0.31/day)
Processor FX-6350 @ 4.2GHz / i5-4670K @ 4GHz
Video Card(s) HD 7850 / R9 290
Because some people just go on ignoring what the people who actually make the stuff say... highly cynical.
 
Joined
Oct 2, 2004
Messages
13,791 (1.90/day)
Technically speaking, AMD had very advanced GPUs, and they are still very advanced, with certain goodies NVIDIA has only just started using. But how this will play out at THIS very moment, no one really knows until real DX12 games show up. And we first need to get Windows 10, so there's that...
 
Joined
Jan 2, 2015
Messages
1,099 (0.31/day)
Processor FX-6350 @ 4.2GHz / i5-4670K @ 4GHz
Video Card(s) HD 7850 / R9 290

Attachment: gcn.PNG (105.2 KB, 470 views)
Joined
Jul 19, 2006
Messages
43,596 (6.59/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phanteks T30 fans.
Memory 32GB G.Skill 6000MHz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
Until people actually start using DX12, it's all theory-crafting.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,153 (2.86/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Isn't this the third or fourth thread I've seen xfia post about AMD in the last 2 days? I think someone is too anxious for a review to calm down and wait. :)
 

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
18,977 (3.03/day)
Location
UK\USA
I don't believe it will. It may for a little while, but nVidia will put HBM memory on their cards sooner or later, which will give them the boost AMD has gotten.

Maybe AMD will have something to really improve their design before or when nVidia adds HBM, but we will have to wait and see.

Either way, if we get 14nm there will be a boost all round, but as we know, AMD and nVidia have no control over that.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,991 (2.54/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV, 1950MHz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
I don't believe it will. It may for a little while, but nVidia will put HBM memory on their cards sooner or later, which will give them the boost AMD has gotten.

Maybe AMD will have something to really improve their design before or when nVidia adds HBM, but we will have to wait and see.

Either way, if we get 14nm there will be a boost all round, but as we know, AMD and nVidia have no control over that.

I don't think it's the HBM memory; it's the GPU chip architecture.
 

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
18,977 (3.03/day)
Location
UK\USA
I don't think it's the HBM memory; it's the GPU chip architecture.

Thing is, I bet nVidia has a pretty good plan for the next design, and let's face it, they also have the money to rush plans if needed.

But yes, you're right, it won't be the HBM alone that's making the improvements.

HBM will help AMD for the first 6 or so months, even more so with those who want compact computers, but nVidia will get their hands on it and dump tons of money into it.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,991 (2.54/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV, 1950MHz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
Thing is, I bet nVidia has a pretty good plan for the next design, and let's face it, they also have the money to rush plans if needed.

But yes, you're right, it won't be the HBM alone that's making the improvements.

HBM will help AMD for the first 6 or so months, even more so with those who want compact computers, but nVidia will get their hands on it and dump tons of money into it.

I think Nvidia has had their hands on it ever since they announced the Volta/Pascal generation of GPUs (Nvidia's CEO held a GPU module in his hand, with the HBM memory and all). They were the first to talk about it in their conferences, but then AMD beat them to actually having a GPU ready with it implemented.
 

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
18,977 (3.03/day)
Location
UK\USA
I think Nvidia has had their hands on it ever since they announced the Volta/Pascal generation of GPUs (Nvidia's CEO held a GPU module in his hand, with the HBM memory and all). They were the first to talk about it in their conferences, but then AMD beat them to actually having a GPU ready with it implemented.

Totally not surprised; also thinking they're after more than 4GB.
 
Joined
Jun 25, 2010
Messages
854 (0.17/day)
Correct me if I'm wrong, but doesn't a higher draw-call budget only allow for more stuff to be on screen? If that's the case, then the bottleneck will most likely be something other than draw calls, giving no advantage to either AMD or NVidia.

The way I understand it, if you want to take advantage of the massive increase in draw calls in DX12, you need less detail overall to keep the GPU from struggling to render everything on screen. I would bet that's why we see an RTS space sim as a draw-call benchmark. The 3DMark draw-call benchmark has very little geometry or texture, and another DX12 game I found was a ground-based RTS.

Plus, don't most game creators make games based on the lowest common denominator?

If I'm right, the improved draw calls DX12 brings create the potential for better-looking games, but unless a game is bottlenecked by draw calls, I don't expect AMD to get any "unlocked" power and leave NVidia in the dust.

But I could be wrong. It did happen once before ;) :p
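To put rough numbers on the "bottlenecked by draw calls" idea, here is a minimal back-of-the-envelope model in C++. The per-call costs are invented, illustrative assumptions, not measurements; real overhead varies by driver, CPU, and workload.

[CODE=cpp]
#include <cstdio>

// Toy model of a per-frame draw-call budget. The per-call CPU costs
// below are made-up illustrative figures, not benchmark data.
int main() {
    const double frameBudgetUs = 16700.0; // 60 fps ~ 16.7 ms per frame
    const double dx11UsPerCall = 2.0;     // assumed: heavier driver validation path
    const double dx12UsPerCall = 0.25;    // assumed: thinner submission path

    std::printf("DX11-ish budget: ~%.0f draw calls/frame\n",
                frameBudgetUs / dx11UsPerCall);
    std::printf("DX12-ish budget: ~%.0f draw calls/frame\n",
                frameBudgetUs / dx12UsPerCall);
    // If a scene issues far fewer calls than either budget, the
    // draw-call ceiling is not the bottleneck -- which is the point above.
    return 0;
}
[/CODE]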
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,916 (2.37/day)
Location
Louisiana
Processor Core i7-8700k 4.4Ghz @ 1.18v
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F Black CPU cooler
Memory 2x 16GB Mushkin Redline DDR-4 3200
Video Card(s) EVGA RTX 2080 Ti Xc (awaiting wkd to install ASUS RTX 4070 Ti Super 16GB)
Storage 1x 1TB MX500 SSD; 1x 6TB WD Black; 2x 4TB WD Black; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
Isn't this the third or fourth thread I've seen xfia post about AMD in the last 2 days? I think someone is too anxious for a review to calm down and wait. :)

Xfia lives, breathes, and has AMD pumped in intravenously.
 
Joined
Nov 9, 2010
Messages
5,672 (1.13/day)
System Name Space Station
Processor Intel 13700K
Motherboard ASRock Z790 PG Riptide
Cooling Arctic Liquid Freezer II 420
Memory Corsair Vengeance 6400 2x16GB @ CL34
Video Card(s) PNY RTX 4080
Storage SSDs - Nextorage 4TB, Samsung EVO 970 500GB, Plextor M5Pro 128GB, HDDs - WD Black 6TB, 2x 1TB
Display(s) LG C3 OLED 42"
Case Corsair 7000D Airflow
Audio Device(s) Yamaha RX-V371
Power Supply SeaSonic Vertex 1200w Gold
Mouse Razer Basilisk V3
Keyboard Bloody B840-LK
Software Windows 11 Pro 23H2
Going forward, I think the creators of PCI Express need to seriously consider abandoning it instead of updating it, in favor of a faster interlink between GPU and CPU, like Nvidia is trying to do with NVLink. It really needs to come from someone OTHER than a GPU designer to be vendor-neutral, though.

IBM is the single biggest beneficiary of NVLink among the four founders of PCI Express. If that kind of thing is to go mainstream on consumer motherboards, though, it really needs to be backed by both Intel and AMD, the latter of which is unlikely with Nvidia at the helm.

So right now I'd like to see Intel and IBM get together and come up with a successor to PCIe. If that happens, I'm sure Dell and HP will jump on board.

As for draw calls and bottlenecks: the lower-level API, by syncing directly with GPU architectures built to utilize it, allows more draw calls by the GPU, but there will also be less demand on the CPU due to consolidation of instruction sets. Back with their Forza DX12 demo, MS explained they will now be able to cache repeated instruction sets, versus issuing them over and over like before.

In layman's terms, what used to be almost entirely manually issued instructions will be heavily automated with DX12.
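The "cache repeated instruction sets" idea maps onto what D3D12 exposes as bundles: a small command list recorded once and replayed every frame. A minimal sketch follows; the function and parameter names are placeholders, and the device, allocator, pipeline state, root signature, and vertex buffer view are assumed to have been created elsewhere.

[CODE=cpp]
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: record a tiny bundle once, then replay it each frame
// instead of re-issuing the same state and draw commands.
ComPtr<ID3D12GraphicsCommandList> RecordBundle(
    ID3D12Device* device,
    ID3D12CommandAllocator* bundleAllocator, // created with D3D12_COMMAND_LIST_TYPE_BUNDLE
    ID3D12PipelineState* pso,
    ID3D12RootSignature* rootSig,
    const D3D12_VERTEX_BUFFER_VIEW& vbView)
{
    ComPtr<ID3D12GraphicsCommandList> bundle;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE,
                              bundleAllocator, pso, IID_PPV_ARGS(&bundle));
    // The repeated state + draw commands are recorded exactly once.
    bundle->SetGraphicsRootSignature(rootSig);
    bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    bundle->IASetVertexBuffers(0, 1, &vbView);
    bundle->DrawInstanced(3, 1, 0, 0);
    bundle->Close();
    return bundle;
}

// Per frame, the direct command list replays the cached commands:
//   frameCommandList->ExecuteBundle(bundle.Get());
[/CODE]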
 

revanchrist

New Member
Joined
Jun 1, 2014
Messages
26 (0.01/day)
I think Nvidia has had their hands on it ever since they announced the Volta/Pascal generation of GPUs (Nvidia's CEO held a GPU module in his hand, with the HBM memory and all). They were the first to talk about it in their conferences, but then AMD beat them to actually having a GPU ready with it implemented.

You speak as if Nvidia invented HBM, when in truth it was AMD who came up with the idea of HBM and partnered with SK Hynix to develop the technology, starting in 2010. HBM is the fruit of five years of investment from AMD. True, SK Hynix is selling HBM to Nvidia for their Pascal generation of GPUs, but that doesn't make Nvidia the "pioneer" here.
 
Joined
Oct 2, 2004
Messages
13,791 (1.90/day)
Hm, does AMD have any licensing/patents on it from the Hynix partnership in developing this tech? It would be smart to have them; in the end, they'd be earning a little from every HBM chip sold to third parties...
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,926 (7.61/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Yes it will. And it will give AMD CPUs a nice advantage. That's why Pascal will feature simpler CUDA cores than Maxwell, and will have a higher degree of parallelism overall. DX12's biggest features are async shaders and native multi-GPU. So you can pair a Fury X with a Titan X in multi-GPU and troll fanboys.
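For reference, the "async shaders" half of that claim corresponds to D3D12's separate compute queue type. A minimal sketch, assuming a `device` created elsewhere, of making a compute queue whose work can overlap the graphics queue on hardware that supports it:

[CODE=cpp]
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: a dedicated compute queue, independent of the graphics
// (DIRECT) queue. On hardware with async compute (e.g. GCN's ACEs),
// command lists submitted here can execute concurrently with rendering.
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    // When rendering depends on the compute results, the two queues
    // are synchronized with an ID3D12Fence.
    return queue;
}
[/CODE]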
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.04/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz @ 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Philips 32" 32M1N5800A (4K144), LG 32" (4K60) | Gigabyte G32QC (2K165) | Philips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Why oh why does this thread have such a flamebait title and options?
Why not "Do you think AMD will gain a big advantage?" or "Will this help AMD close the performance gap?"

Yeah, seeing the results from things like Star Swarm and 3DMark makes me think AMD has been planning this for a long time.

http://i.imgur.com/20ytdvO.png


Those are someone else's results, but very similar to mine (due to a bug with 3DMark, I can't test DX12 on this build of W10).
Long story short: Nvidia has better multi-threaded DX11. In DX12, AMD seems to be skyrocketing ahead, and guess which company sells CPUs with lower single-threaded clock speeds but lots of cores?
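That core-count point is what D3D12's free-threaded command-list recording targets. A hedged sketch (the names and the per-thread work split are placeholders; the device, queue, and one allocator per thread are assumed to exist) of recording on several threads and submitting once:

[CODE=cpp]
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: each worker thread records its own command list (command
// allocators are single-threaded, hence one per thread); the main
// thread then submits everything in a single call.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      std::vector<ComPtr<ID3D12CommandAllocator>>& allocators)
{
    const size_t n = allocators.size();
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(n);

    std::vector<std::thread> workers;
    for (size_t i = 0; i < n; ++i) {
        workers.emplace_back([&, i] {
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            // ... record this thread's share of the scene's draws here ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
[/CODE]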
 
Joined
Oct 2, 2004
Messages
13,791 (1.90/day)
VIA? ;) AMD certainly has more experience with async shaders; they've been using them for way longer than NVIDIA (or anyone else).
 
Joined
Apr 25, 2013
Messages
127 (0.03/day)
No surprise if DX12 unlocks AMD's performance; DX12 is literally Microsoft's Mantle, by the way.
But I doubt they can leave nVidia in the dust.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.04/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz @ 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Philips 32" 32M1N5800A (4K144), LG 32" (4K60) | Gigabyte G32QC (2K165) | Philips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
No surprise if DX12 unlocks AMD's performance; DX12 is literally Microsoft's Mantle, by the way.
But I doubt they can leave nVidia in the dust.

In draw calls at least, AMD has gone from a decently sized deficit to suddenly leaping ahead quite a lot. How that translates to real-world performance remains to be seen. In CPU-intensive titles (RTS games) it could be a big thing.
 
Joined
Sep 6, 2013
Messages
3,216 (0.80/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years later, got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Noctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes/ NVMes, SATA Storage / NVMe boot(Clover), SATA storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
3DMark's API feature test proved that AMD's drivers DO have problems with draw calls under DX11. Because of that, AMD's hardware is really getting a huge boost with DX12, and even performs better than Nvidia hardware. That's the reason Nvidia is preparing to defend its position through GameWorks and other software libraries that will try to put a few brakes on AMD's GPUs.

That alone is enough to believe that AMD will get a nice boost from DX12 and will probably come out on top in performance. Will they leave Nvidia in the dust? That depends on how successful GameWorks will be in the future, and Nvidia does have plenty of money to make it successful.

PS: One more thing to consider is that Nvidia's architecture needs constant optimization, while GCN is something you can leave on autopilot. That's why, in my opinion, AMD cards that perform equally to or a little worse than Nvidia models in the beginning perform much faster than those same Nvidia models 2-3 years later. So, by the time Pascal comes out, AMD cards will look better than they do today.

They were the first to talk about it in their conferences, but then AMD beat them to actually having a GPU ready with it implemented.
AMD co-developed HBM with Hynix; Nvidia never had their hands on HBM before AMD. This reminds me of DX12, and how Nvidia had access to it before AMD, yet poor little AMD still managed to come out with Mantle before we even knew DX12 existed.

High Bandwidth Memory - Wikipedia, the free encyclopedia




Plus, don't most game creators make games based on the lowest common denominator?
You mean PhysX and GameWorks?
 
Joined
Jan 2, 2015
Messages
1,099 (0.31/day)
Processor FX-6350 @ 4.2GHz / i5-4670K @ 4GHz
Video Card(s) HD 7850 / R9 290
In draw calls at least, AMD has gone from a decently sized deficit to suddenly leaping ahead quite a lot. How that translates to real-world performance remains to be seen. In CPU-intensive titles (RTS games) it could be a big thing.
You're failing to grasp what async shaders do and what they mean: what is supposed to run on the CPU will be, and what is supposed to run on the GPU will be. Microsoft already said one GCN performance optimization was over 30 percent for lighting effects.
Some people have called GPUs stupid throughout history; well, here it is, the GPU getting brains, for everyone that understands the async highway.
Nvidia may push harder with GameWorks, but they will lose to the future.
Don't be afraid to watch the animated videos! You get to learn, and they make it simple enough that your kids can learn, like it's the Discovery Channel!
 