
Editorial NVIDIA's Weakness is AMD's Strength: It's Time to Regain Mid-Range Users

Joined
Oct 22, 2014
Messages
14,163 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
I think you fail to understand that the majority of users don't buy high-end cards, but mid-range or low-end cards, which is why Intel is the leader in GPU sales.
Intel? :confused:
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
and probably Navi will just be a number of tweaks.
+ an Nvidia Tensor competitor, likely.
Yes, certainly, there is a good chance of that.
But Navi is not going to be a redesign of the fundamental architecture of GCN.

Vega 56 would be better value for your money, especially with flashing the 64 BIOS, overclocking and undervolting. These seem to give very good results, as AMD was pretty much rushing those GPUs out without properly tuning power consumption. The Vega arch on this process is already maxed out. Anything above 1650 MHz with a full load applied is running towards 350 to 400 W territory. Almost twice as much as a 1080, and probably not even a third more performance.
Wait a minute, you're arguing Vega is a good buy since you can flash the BIOS and overclock it?
You are talking about something which is very risky, and even when successful, the Pascal counterparts are still better. So what's the point? You are encouraging something which should be restricted to enthusiasts who do that as a hobby. This should never be a buying recommendation.

No matter how you flip it, Vega was an inferior choice vs. Pascal at last year's prices, and now, with Pascal on sale and Turing hitting the shelves, there is no reason to buy Vega for gaming.
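That said, the power behaviour the quoted post describes is at least consistent with the usual first-order model of dynamic power, P ≈ C·V²·f. A rough sketch with made-up voltage/clock points (not measured Vega figures) shows why pushing clocks past the sweet spot balloons power, and why undervolting pays off:

```python
# First-order dynamic power model: P ~ C * V^2 * f
# The constant and the voltage/clock pairs are illustrative assumptions,
# not measured Vega data.

def dynamic_power(c, volts, mhz):
    """Relative dynamic power for a given core voltage and clock."""
    return c * volts ** 2 * mhz

C = 1.0  # arbitrary scaling constant, cancels out in the ratios

stock = dynamic_power(C, 1.00, 1500)        # hypothetical stock operating point
pushed = dynamic_power(C, 1.20, 1700)       # hypothetical overclock that needs more voltage
undervolted = dynamic_power(C, 0.95, 1500)  # same clock, lower voltage

print(f"overclock vs stock:  {pushed / stock:.2f}x power")       # ~1.63x
print(f"undervolt vs stock:  {undervolted / stock:.2f}x power")  # ~0.90x
```

A ~13% clock bump that needs ~20% more voltage costs over 60% more power in this model, which is the shape of the problem once Vega is pushed past its efficiency sweet spot.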

The refresh on a smaller node is good: it allows AMD to lower power consumption, push for higher clocks and hopefully produce cheaper chips. The smaller you make them, the more fit on a silicon wafer. RTX is so damn expensive because those are frankly big dies, and big dies take up a lot of space on a wafer.
Turing is currently about twice as power-efficient as Vega; even with a node shrink, Vega will not be able to compete there. And don't forget that Nvidia has access to the same node as AMD.
Still, the first generation of the 7 nm node will not produce high volumes. Production with triple/quad patterning on DUV will be very slow and have yield issues. A few weeks ago GloFo gave up on 7 nm, not because it didn't work, but because it wasn't cost effective. It will take a while before we see wide adoption of 7 nm; volumes and costs will probably eventually beat the current nodes, but it will take a long time.
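On the die-size economics from the quoted post, the standard first-order dies-per-wafer estimate makes the point concretely. The die areas below are approximate published figures, and the formula deliberately ignores defects and yield:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order dies-per-wafer estimate (ignores defects, yield and scribe lines)."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Approximate published die sizes in mm^2 -- treat these as ballpark figures.
for name, area in [("TU102 (RTX 2080 Ti)", 754),
                   ("Vega 10", 486),
                   ("Polaris 10 (RX 480/580)", 232)]:
    print(f"{name:24} ~{area} mm^2 -> ~{dies_per_wafer(area)} dies per 300 mm wafer")
```

Roughly 70 TU102 candidates per 300 mm wafer versus ~260 Polaris 10 candidates, before yield even enters the picture, goes a long way toward explaining RTX pricing.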

Polaris was a good mid-range card, and still is. PUBG runs excellently with a 75 Hz/FPS lock at WQHD. In my opinion people don't need 144 fps on a 60 Hz screen. Cap that and you can easily halve your power bill. :) Something you don't hear people saying either.
When the first GCN cards were released, they competed well with Kepler, but Maxwell started to pull ahead of the 2nd/3rd gen GCN of the 300 series. Polaris (4th gen GCN) is barely different from its predecessors; most of the improvement is a pure node shrink, and you can't call it good when it's on par with the previous Maxwell on an older node. RX 480/580 was never a better choice than the GTX 1060, and the lower models are just low-end anyway.

Is there a new product coming... something, or does AMD just go idle for 12-18 months, maintaining Polaris products as they are?
We don't know if there will be another refresh of Polaris, but Navi is still many months away.

But I would be worried about buying AMD cards so late in the product cycle; they have been known to drop driver support for cards that are still sold. Until their policy changes, I wouldn't buy anything but their latest generation.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,933 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Editorial news posts will now automatically have an "Editorial" thread prefix on the forums, like this thread.
 
Joined
Dec 30, 2010
Messages
2,199 (0.43/day)
Yes, certainly, there is a good chance of that.


When the first GCN cards were released, they competed well with Kepler, but Maxwell started to pull ahead of the 2nd/3rd gen GCN of the 300 series. Polaris (4th gen GCN) is barely different from its predecessors; most of the improvement is a pure node shrink, and you can't call it good when it's on par with the previous Maxwell on an older node. RX 480/580 was never a better choice than the GTX 1060, and the lower models are just low-end anyway.

On Reddit, several people who went from an AMD-based card back to Nvidia noted an instant color difference in games, with textures in the green camp a bit blurrier compared to AMD. Saying that the 1060 is a better choice is not really true if you care about image quality. It wouldn't surprise me if Nvidia is altering LOD here and there to make the benchmark percentages look better.

The image quality of AMD cards in games is in general still superior compared to Nvidia. One reason for me to stick with AMD.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
On Reddit, several people who went from an AMD-based card back to Nvidia noted an instant color difference in games, with textures in the green camp a bit blurrier compared to AMD. Saying that the 1060 is a better choice is not really true if you care about image quality. It wouldn't surprise me if Nvidia is altering LOD here and there to make the benchmark percentages look better.

The image quality of AMD cards in games is in general still superior compared to Nvidia. One reason for me to stick with AMD.
Hmmm, "random" people on reddit…

Over 10 years ago there used to be large variations in render quality, in both texture filtering and AA, between different generations. But since Fermi and HD 5000 I haven't seen any substantial differences.

There are benchmarks like 3DMark where both are known to cheat by overriding the shader programs. But as for GCN vs. Pascal, there is no general difference in image quality. You might be able to find an edge case in a special AA mode, or a game where the driver "optimizes" (read: screws up) a shader program. But Direct3D, OpenGL and Vulkan are all designed with requirements such that variations in rendering output should be minimal.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Editorial news posts will now automatically have an "Editorial" thread prefix on the forums, like this thread.

This post needs a million likes.
 
Joined
Sep 26, 2018
Messages
47 (0.02/day)
Ray tracing looks pretty, but I'm not sure it's all that exciting to gamers. But since ray tracing requires more extensive and flexible arithmetic capabilities than conventional graphics operations, I have wondered whether NVIDIA's new cards with ray tracing capabilities would also be better at GPU computing. If so, I hope that AMD does find a way to eventually incorporate this capability into their products as well.
 
Joined
Feb 3, 2017
Messages
3,810 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Ray tracing looks pretty, but I'm not sure it's all that exciting to gamers. But since ray tracing requires more extensive and flexible arithmetic capabilities than conventional graphics operations, I have wondered whether NVIDIA's new cards with ray tracing capabilities would also be better at GPU computing. If so, I hope that AMD does find a way to eventually incorporate this capability into their products as well.
Ray Tracing capabilities will not make GPUs better at general computing.
Turing is better at computing than Pascal, but not because of the Ray Tracing or Tensor cores; it's because it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.

There are enough signs that Ray Tracing will take off in some form or another - DXR, Vulkan RT extensions, OptiX/ProRender - and the technology is likely to bleed over from the professional sector, where it has already taken off. The question is when and how, and whether Turing's implementation will be relevant.
 
Joined
Mar 10, 2014
Messages
1,793 (0.46/day)
Ray Tracing capabilities will not make GPUs better at general computing.
Turing is better at computing than Pascal, but not because of the Ray Tracing or Tensor cores; it's because it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.

There are enough signs that Ray Tracing will take off in some form or another - DXR, Vulkan RT extensions, OptiX/ProRender - and the technology is likely to bleed over from the professional sector, where it has already taken off. The question is when and how, and whether Turing's implementation will be relevant.

Would be super cool to get some redux games with RT, where it could shine. I'm thinking of games like Mirror's Edge. Just imagine that with RT reflections on buildings.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
Ray Tracing capabilities will not make GPUs better at general computing.
Terminology mix-up.
Ray tracing is just another computing task, and it can definitely be put under the "general processing" umbrella.
Hence, adding an RT-specific ASIC to the chip will (by definition) improve computing potential in a very small class of problems.
However, as we learn how to use RT cores for other problems, they will become more and more useful (and the GPGPU "gain" will increase).

Ray Tracing is primarily a specific case of the collision detection problem, so it's not that difficult to imagine other problems where this ASIC could be used.
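To make the collision-detection point concrete: at its core, a ray tracer just asks where a ray first hits something. A minimal ray-sphere intersection test in plain Python (purely illustrative; the real hardware works on triangles and BVHs):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    a = sum(d * d for d in direction)
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest of the two roots
    return t if t > 0 else None

# Ray from the origin along +z, unit sphere 5 units ahead: hits at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```

The same primitive - does this ray hit that volume, and how far away - shows up in physics, audio occlusion and AI line-of-sight, which is why reusing the hardware for other problems isn't far-fetched.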

Turing is better at computing than Pascal, but not because of the Ray Tracing or Tensor cores; it's because it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.
Most of the things written above apply to Tensor cores. They're good at matrix operations, and they will speed up other tasks. They already do.
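For context, the Tensor core primitive is essentially a small fused matrix multiply-accumulate, D = A·B + C, on 4×4 tiles (FP16 inputs, higher-precision accumulate). A quick numpy emulation of that operation, just to show the shape of the workload rather than how the hardware is actually programmed:

```python
import numpy as np

# Emulate the Tensor-core style primitive: D = A @ B + C on a 4x4 tile,
# with half-precision inputs accumulated at single precision.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```

Anything that can be phrased as lots of small matrix multiplies (neural-network inference, DLSS-style upscaling) maps onto that primitive; anything that can't gains nothing from the Tensor cores.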
There are enough signs that Ray Tracing will take off in some form or another - DXR, Vulkan RT extensions, OptiX/ProRender - and the technology is likely to bleed over from the professional sector, where it has already taken off. The question is when and how, and whether Turing's implementation will be relevant.
The 2080 Ti's RT cores speed up Ray Tracing around 6x (allegedly) compared to what the chip could do on CUDA alone. That means they're a few dozen times more effective than the CUDA cores they could be replaced with (measured by area). That's fairly significant.
If Ray Tracing catches on as a feature, the hardware accelerator's performance will be so far ahead that it will become a standard.
And even if RT remains niche (or only available in high-end GPUs), someone will soon learn how to use this hardware to boost physics or something else. That's why we shouldn't rule out RT cores in cheaper GPUs - even if they would be too slow to cover the "flagship" use case, i.e. RTRT in gaming.
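The "few dozen times" claim is simple back-of-the-envelope arithmetic. The area share below is a hypothetical placeholder (Nvidia hasn't published the real figure); it's only there to show the reasoning:

```python
# Back-of-the-envelope: how much more effective per unit of die area are RT cores
# than the CUDA cores they displace, for ray tracing specifically?
rt_speedup = 6.0      # alleged speedup vs. doing BVH/intersection work on CUDA alone
rt_area_share = 0.10  # HYPOTHETICAL share of compute area spent on RT cores

# To get the same speedup from shaders alone you'd need ~(rt_speedup - 1) extra
# "chips worth" of CUDA area; the RT cores deliver it with rt_area_share.
effectiveness_per_area = (rt_speedup - 1) / rt_area_share
print(f"~{effectiveness_per_area:.0f}x more effective per unit area")  # ~50x
```

Swap in whatever area share you believe; as long as the RT cores are a small slice of the die, their per-area advantage for this specific workload stays large.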

Also, I believe you're thinking about RTRT when saying RT, right? ;-)
 
Joined
Feb 3, 2017
Messages
3,810 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
I suppose it's not that important to be very specific with terminology in a thread like this, is it? If I remember correctly, RT cores are technically additional units in every SM with hardware BVH traversal capability. And like you said, Tensor cores do certain types of matrix operations in hardware. For an average gamer or hardware enthusiast (like me) they are still RT and Tensor cores, as opposed to CUDA/GCN cores that are GPGPU :D
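Since hardware BVH traversal came up: in software it boils down to repeatedly testing a ray against axis-aligned boxes and descending only into the boxes that are hit. A tiny sketch of that slab test (illustrative only):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?"""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        if direction[axis] == 0.0:
            # Ray is parallel to these slabs: miss if the origin lies outside them.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - origin[axis]) / direction[axis]
        t2 = (box_max[axis] - origin[axis]) / direction[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# Ray along +z from the origin vs. a unit box 5 units ahead: True (hit).
print(ray_hits_aabb((0, 0, 0), (0, 0, 1), (-0.5, -0.5, 4.5), (0.5, 0.5, 5.5)))
```

A BVH is just a tree of such boxes with triangles at the leaves; the RT cores walk the tree and do these tests in fixed-function hardware instead of shader code.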

RT is really new, so there are no real applications for it in the consumer space. Same for Tensor (although those have very clear uses in the HPC space). I would not count on these being very useful beyond their intended use case.

You are right, I did mean RTRT. Although, strictly speaking, RTX does not necessarily have to mean RTRT. A lot of applications for Quadro are not real-time, yet are still accelerated on the same units.
 