
AMD's New RDNA GPU Architecture Has Been Exclusively Designed for Gamers

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
GPGPU physics isn't a memory problem, it's an API problem. There are really only two popular physics APIs: PhysX (which only runs on the GPU on NVIDIA hardware) and Havok (whose GPGPU implementation was never finished). There's no all-inclusive GPGPU API, which means you're writing costly code paths for specific hardware, a practice that died out in the 1990s. Even if Microsoft debuts DirectPhysics (the only GPGPU-agnostic solution on the horizon), it won't see broad adoption because physics code made for Windows likely won't work on Mac OS X, Linux, or PlayStation, so the 1990s come calling again.

GPGPU physics just isn't worth the headaches for 99.98% of developers out there; it's a novelty and not a strong foundation for game design.


Oh, and CPU-GPU bandwidth definitely isn't a sticking point for it. All the CPU communicates is the inputs and the outputs: you're basically just throwing vectors back and forth for objects--trivial.
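To put rough numbers on that, here's a quick illustrative sketch in Python (the object count and per-body layout are my own assumptions, not taken from any particular engine):

    # Rough per-frame CPU<->GPU traffic for rigid-body physics state.
    floats_per_body = 3 + 3 + 4 + 3       # position, velocity, orientation quaternion, angular velocity
    bytes_per_body = floats_per_body * 4  # 32-bit floats
    bodies = 10_000                       # a generously large scene
    per_frame = bodies * bytes_per_body   # 520,000 bytes, ~0.5 MB each way
    per_second = per_frame * 144          # ~75 MB/s at 144 fps
    print(per_frame / 1e6, "MB per frame;", per_second / 1e6, "MB/s")
    # ~0.5 MB per frame and ~75 MB/s -- a sliver of a ~16 GB/s PCIe 3.0 x16 link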
 
Joined
Jan 8, 2017
Messages
9,437 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Moving even as little as 100 MB takes about 10 ms across a PCI-e 3.0 connection. Let's say you want 144 Hz, since that's all the rage these days: one frame takes 6.9 ms, but your memory transfer alone for GPGPU purposes takes 10 ms, ouch. Now, you may not need to move 100 MB all the time, but you can easily see how this doesn't work very well if you want high framerates, irrespective of how fast the GPU itself is. And let's not even bring in the matter of latency.

This is totally a memory transfer problem, and because of that, concurrent graphics and GPGPU remains problematic. Take a guess as to why most have given up on GPGPU implementations; it can't all be lack of interest, there are technical limitations that prohibit this. Even PhysX implementations usually pertain to small-scale effects here and there, which can often be run on a CPU just fine despite the obvious computational advantage GPUs have.
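The arithmetic behind those figures, as a minimal sketch (assuming the ~10 GB/s effective PCI-e 3.0 x16 throughput the post implies; the theoretical peak is closer to 15.75 GB/s):

    # Transfer time for a 100 MB payload vs. the frame budget at 144 Hz.
    payload_mb = 100
    effective_gb_s = 10                                        # assumed effective PCIe 3.0 x16 throughput
    transfer_ms = payload_mb / (effective_gb_s * 1000) * 1000  # 10.0 ms
    frame_budget_ms = 1000 / 144                               # ~6.94 ms per frame
    print(transfer_ms, frame_budget_ms)                        # the copy alone already exceeds one frame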
 
Last edited:
Joined
Sep 17, 2014
Messages
22,452 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
absolutely. 66 fps vs 60 fps is noticeable. The more intensive the scene (e.g. fighting in Assassin's Creed Odyssey), the more pronounced the difference is. In regular world exploration you can see it, but it doesn't really have that much impact.
anyway, it doesn't matter if you can see it or not: card A is slower, has no RTX acceleration, runs hotter and louder at the same price as card B = that means you're not getting a product of the same quality.


but only with AMD are you getting 16 GB VRAM and PCI-e 4.0, up to 69% faster, much wow





boy, for those who mocked the RTX demo, will your face be red now that the most hyped-up GPU series since the launch of Vega presented you with whatever this is




I get why you'd make PCI-e 4.0 for X570/Ryzen 3000 a big thing, the connectivity capabilities of the chipset are just insane. But make your mid-range card all about PCI-e 4.0? Have they completely forgotten what actually sells cards?

Oh come on. They don't have an RT gimmick to show off; you need to do something to package 2-3 year old performance and make it stick, right?

Nvidia hasn't been doing a whole lot differently since the 1080 Ti, they just market it better. All GPUs since Pascal are baby steps, let's just call it what it is, and no, I don't consider a 1200 dollar GPU a meaningful step forward, that's way beyond the comfort zone for this gaming segment. Think back - Pascal took its sweet time to get replaced, and when it did, the only progress in the midrange was the RTX 2060. The rest might just as well not exist and nobody would game any differently - well, bar that 0.9% that buys the 2080 Ti.

All things considered I think both companies do a fine job hiding that loooong wait for 7nm. It's just a shame AMD didn't get to do more with Navi than this IMO, they've had their sweet time by now as well. They are really both dragging it out because they know there is only a limited gain left on the horizon.

Moving even as little as 100 MB takes about 10 ms across a PCI-e 3.0 connection. Let's say you want 144 Hz, since that's all the rage these days: one frame takes 6.9 ms, but your memory transfer alone for GPGPU purposes takes 10 ms, ouch. Now, you may not need to move 100 MB all the time, but you can easily see how this doesn't work very well if you want high framerates, irrespective of how fast the GPU itself is. And let's not even bring in the matter of latency.

This is totally a memory transfer problem, and because of that, concurrent graphics and GPGPU remains problematic. Take a guess as to why most have given up on GPGPU implementations; it can't all be lack of interest, there are technical limitations that prohibit this. Even PhysX implementations usually pertain to small-scale effects here and there, which can often be run on a CPU just fine despite the obvious computational advantage GPUs have.

Interesting, never knew that was such a major limitation for physics.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Moving even as little as 100 MB takes about 10 ms across a PCI-e 3.0 connection. Let's say you want 144 Hz, since that's all the rage these days: one frame takes 6.9 ms, but your memory transfer alone for GPGPU purposes takes 10 ms, ouch. Now, you may not need to move 100 MB all the time, but you can easily see how this doesn't work very well if you want high framerates, irrespective of how fast the GPU itself is. And let's not even bring in the matter of latency.

This is totally a memory transfer problem, and because of that, concurrent graphics and GPGPU remains problematic. Take a guess as to why most have given up on GPGPU implementations; it can't all be lack of interest, there are technical limitations that prohibit this. Even PhysX implementations usually pertain to small-scale effects here and there, which can often be run on a CPU just fine despite the obvious computational advantage GPUs have.
If my math doesn't suck, PCIE 3.0 x16 is 15.75 GB/s, which translates to 15.75 MB/ms, so your theoretical example of 100 MB would take 6.35 ms. The only reason you'd even consider doing that is if you were recording at 144 Hz, in which case the GPU itself is using its encoding ASIC to cut the bandwidth way down... the amount depends on resolution and color depth. The only thing that can get close to saturating that is 4K but... good luck getting 144 fps in the first place. The encoder naturally has a high compression rate when duplicate frames are injected so, unless you have a lot of movement and are getting 144+ fps, it's not going to be a problem.
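For anyone checking the math, a quick sketch of where the 15.75 GB/s and 6.35 ms figures come from (per direction, counting only the 128b/130b line encoding and ignoring other protocol overhead):

    # PCIe 3.0 x16 peak throughput and the resulting 100 MB transfer time.
    per_lane_gt_s = 8                                # PCIe 3.0 signals at 8 GT/s per lane
    encoding = 128 / 130                             # 128b/130b line encoding
    lanes = 16
    gb_per_s = per_lane_gt_s * encoding / 8 * lanes  # ~15.75 GB/s, i.e. ~15.75 MB/ms
    print(gb_per_s, 100 / gb_per_s)                  # ~15.75 GB/s, ~6.35 ms for 100 MB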

CPUs request very little back from GPUs, so the upstream direction used for saving the video stream is mostly idle anyway. It's the CPU that sends a lot of data to the GPU, not the other way around.

Again, it's not a bandwidth problem, it's a game design problem. If you have a haphazard implementation of physics that is actually more than cosmetic, you're going to get a haphazard game experience from it too. Unless the physics are instrumental to the game design (see Havok and Red Faction: Guerrilla), developers always smash the "easy" button.

Interesting, never knew that was such a major limitation for physics.
It's not. Realistic physics just aren't a priority for most developers. Even in cases where they are (Make Sail comes to mind--it uses PhysX), they do it on the CPU for compatibility's sake. There is no 100% hardware- and software-agnostic physics library. CPU PhysX is attractive because Unreal Engine 4 and Unity both use it and are multi-platform friendly. Any decision to use GPU acceleration for PhysX means the game is broken for all Intel (Make Sail will run on decent integrated graphics), AMD, and ARM-based players.


...this is getting so far off topic.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,437 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
If my math doesn't suck, PCIE 3.0 x16 is 15.75 GB/s, which translates to 15.75 MB/ms, so your theoretical example of 100 MB would take 6.35 ms. The only reason you'd even consider doing that is if you were recording at 144 Hz, in which case the GPU itself is using its encoding ASIC to cut the bandwidth way down... the amount depends on resolution and color depth. The only thing that can get close to saturating that is 4K but... good luck getting 144 fps in the first place. The encoder naturally has a high compression rate when duplicate frames are injected so, unless you have a lot of movement and are getting 144+ fps, it's not going to be a problem.

In practice you don't see more than 10-12 GB/s of bandwidth, I am being realistic here. Also I have no idea what recording has to do with any of this. You want compute on the GPU, you need the bandwidth for it, it's a basic fact and when you try to intertwine it with graphics you encounter issues. No matter what you do, you can't avoid this, games need to be interactive systems and data has to be updated with each frame.

You got me curious though: if PCI-e bandwidth is utterly irrelevant for stuff such as GPU-accelerated physics, how did that benchmark AMD showed work? Does it just move worthless data back and forth, tie that to framerate, and report bogus results?
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Also I have no idea what recording has to do with any of this.
Because it's the only constant thing that uses a lot of bandwidth that can't be precached.

You want compute on the GPU, you need the bandwidth for it, it's a basic fact and when you try to intertwine it with graphics you encounter issues.
Physics is a high compute, low bandwidth workload as far as the CPU-GPU link is concerned. They were doing heavier physics simulations back on PCIE 1.0 than they're doing now in games.

Find a citation of one developer that said PCIE bandwidth is an issue for their game. Most developers bitch about console bandwidth being so restrictive, not PC, and in those instances, it's because they were doing something wrong (e.g. loading assets into the wrong memory which choked it).

Does it just move worthless data back and forth, tie that to framerate, and report bogus results?
Likely. Benchmarks tend to do frivolous things in repeatable ways.

Many people have tested games by blocking PCI lanes and it just doesn't make much difference unless you get under x8 lanes. This has been true for every generation since the first. As graphics cards get faster, so does PCIE. It's always been a non-issue...unless two GPUs are trying to talk to each other.


To be perfectly clear: GPU physics is bad because it robs frames from the game, not because of bandwidth but because compute resources that would be used for graphics are being used for physics instead. Most players can't tell fake physics from realistic physics, but they can tell 30 fps from 15 fps. That's why GPU physics has been relegated to optional cosmetics: turning it off won't break the game.
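As a rough illustration of that frame-robbing point (the numbers are made up for the example; the 25% physics share is purely an assumption):

    # If physics takes a slice of each frame's GPU time, graphics gets the rest.
    base_fps = 60
    gpu_ms_per_frame = 1000 / base_fps  # ~16.7 ms of GPU work per frame
    physics_share = 0.25                # assume physics consumes 25% of the GPU
    new_frame_ms = gpu_ms_per_frame / (1 - physics_share)
    print(1000 / new_frame_ms)          # 45 fps: the same graphics work now fits in 75% of the GPU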
 
Last edited:
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
In practice you don't see more than 10-12 GB/s of bandwidth, I am being realistic here. Also I have no idea what recording has to do with any of this. You want compute on the GPU, you need the bandwidth for it, it's a basic fact and when you try to intertwine it with graphics you encounter issues. No matter what you do, you can't avoid this, games need to be interactive systems and data has to be updated with each frame.

You got me curious though: if PCI-e bandwidth is utterly irrelevant for stuff such as GPU-accelerated physics, how did that benchmark AMD showed work? Does it just move worthless data back and forth, tie that to framerate, and report bogus results?
Imho, both AMD and Intel intend to utilize the GPU and the CPU coherently together in order to make ray tracing and other edge cases like AI faster; both of these use cases can be helped by lower-latency, higher-bandwidth interconnects.
It's easy to get caught up in the way things are; it is important to note where things are going.
Sony's main tech guy said the next generation is about a programmable era, not a graphical era, despite rays being in the news.
 
Last edited:
Joined
Mar 10, 2015
Messages
3,984 (1.12/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Are you making a statement, or asking a question whose answer no one outside of AMD could know? Besides that, no comment; I'll wait for benches before offering my 2 cents.

Did I leave off the question mark? I thought I saw a slide where they were touting 2070 perf for $499. So I asked. Statements don't usually end in question marks.
 
Joined
Apr 18, 2019
Messages
935 (0.46/day)
Location
The New England region of the United States
System Name Gaming Rig
Processor Ryzen 7 3800X
Motherboard Gigabyte X570 Aurus Pro Wifi
Cooling Noctua NH-D15 chromax.black
Memory 32GB(2x16GB) Patriot Viper DDR4-3200C16
Video Card(s) EVGA RTX 3060 Ti
Storage Samsung 970 EVO Plus 1TB (Boot/OS)|Hynix Platinum P41 2TB (Games)
Display(s) Gigabyte G27F
Case Corsair Graphite 600T w/mesh side
Audio Device(s) Logitech Z625 2.1 | cheapo gaming headset when mic is needed
Power Supply Corsair HX850i
Mouse Redragon M808-KS Storm Pro (Great Value)
Keyboard Redragon K512 Shiva replaced a Corsair K70 Lux - Blue on Black
VR HMD Nope
Software Windows 11 Pro x64
Benchmark Scores Nope
Did I leave off the question mark? I thought I saw a slide where they were touting 2070 perf for $499. So I asked. Statements don't usually end in question marks.

Why would they claim 2070 performance for $499 when you can already get a 2070 for $499? Maybe 2080 performance for $499, but AMD releasing something that actually has 2080-level performance across the board seems like a pipe dream at this point. It also would make zero sense if they are claiming that Radeon VII will still be the top of their stack.
 
Joined
Nov 21, 2010
Messages
2,353 (0.46/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
Did I leave off the question mark? I thought I saw a slide where they were touting 2070 perf for $499. So I asked. Statements don't usually end in question marks.

I know statements don't end with a question mark. As a question it made no sense and there was no way to answer it, because you thought you saw something you didn't.
 
Joined
Feb 19, 2019
Messages
324 (0.15/day)
Is it me, or does Dr. Lisa Su love to have fun with the new products?
At CES she showed us the 9900K vs an 8C Ryzen 3000, and at Computex she first presented the 3700X/3800X CPUs we all expected and then surprised everyone with the 12C Ryzen 9.
In similar fashion she showed us the "look how tiny it is" RX 5700 and a demo vs the RTX 2070, so at E3, after presenting the RX 5700, will she surprise us with an RX 5900[?] to compete with the RTX 2080 Ti?
 
Joined
Nov 21, 2010
Messages
2,353 (0.46/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
absolutely. 66 fps vs 60 fps is noticeable. The more intensive the scene (e.g. fighting in Assassin's Creed Odyssey), the more pronounced the difference is. In regular world exploration you can see it, but it doesn't really have that much impact.
I don't agree with that but don't feel like arguing that point
anyway, it doesn't matter if you can see it or not: card A is slower, has no RTX acceleration, runs hotter and louder at the same price as card B = that means you're not getting a product of the same quality.

I don't know how you guys can keep dancing around card A being faster and having RTX. Card A is faster than card B with RTX off. Card A is slower than card B with RTX on. So you can't sit there and bring up RTX whenever it is said card A is faster than card B and the response is "well, not by much", because when RTX is on it is slower.
 
Joined
Aug 6, 2017
Messages
7,412 (2.77/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
Last edited:
Joined
Mar 10, 2015
Messages
3,984 (1.12/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Why would they claim 2070 performance for $499 when you can already get a 2070 for $499? Maybe 2080 performance for $499, but AMD releasing something that actually has 2080-level performance across the board seems like a pipe dream at this point. It also would make zero sense if they are claiming that Radeon VII will still be the top of their stack.

How about because they already have done this once before? You could get 2080 performance for $699 and they went ahead and launched a gpu with less performance for $699.

I know statements don't end with a question mark. As a question it made no sense and there was no way to answer it, because you thought you saw something you didn't.

Sounds like this could have been a good answer: 'No, they didn't say that anywhere. You likely read someone's opinion in an article.' The last half isn't even really needed. Yeah, I think that would have perfectly answered the question.
 
Joined
Feb 8, 2017
Messages
228 (0.08/day)
Topic title needs a slight fix

AMD's new RDNA arch has been exclusively renamed for gamers



So far all that console dominance has resulted in zero progress on their GPU side. Just more chips at the same performance and a near similar price. And they try to sell you that Strange Brigade performance as if it applies everywhere else, while it does not.

Can't say I'm feeling the gamer love
OMG, the same CRAP all over again. Nvidia has had the SAME architecture for over 10 years now, buddy boy. I mean, their latest Turing is essentially the same architecture as in their GTX 200 days.

The whole old, tiresome, propagandist fake news that AMD is using the same architecture while somehow Nvidia magically has a new architecture every time is the most absurd propaganda I've ever seen on the internet. The fact of the matter is that Turing literally has its roots in, and deep down is the same architecture as, their GTX 200 architecture from over 10 years ago!

If anything, AMD has changed architectures more times, and even their GCN architecture has seen major changes with each revision. GCN 1 to GCN 4 is virtually incomparable; even Vega was a massive redesign of GCN 4, which was already massively different from GCN 3.

What's with this fake news propaganda that somehow everything AMD does in the GPU space is GCN and always the same, while Nvidia, the saviors of the universe, the angels of heaven, always magically have a new architecture every time, brought down from the clouds on unicorns? Complete fake news propaganda! It's worse than the decades-old fake news of "AMD drivers are bad".
 
Joined
Aug 6, 2017
Messages
7,412 (2.77/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
OMG, the same CRAP all over again. Nvidia has had the SAME architecture for over 10 years now, buddy boy. I mean, their latest Turing is essentially the same architecture as in their GTX 200 days.

The whole old, tiresome, propagandist fake news that AMD is using the same architecture while somehow Nvidia magically has a new architecture every time is the most absurd propaganda I've ever seen on the internet. The fact of the matter is that Turing literally has its roots in, and deep down is the same architecture as, their GTX 200 architecture from over 10 years ago!

If anything, AMD has changed architectures more times, and even their GCN architecture has seen major changes with each revision. GCN 1 to GCN 4 is virtually incomparable; even Vega was a massive redesign of GCN 4, which was already massively different from GCN 3.

What's with this fake news propaganda that somehow everything AMD does in the GPU space is GCN and always the same, while Nvidia, the saviors of the universe, the angels of heaven, always magically have a new architecture every time, brought down from the clouds on unicorns? Complete fake news propaganda! It's worse than the decades-old fake news of "AMD drivers are bad".
oh dis gon b gud ;)
 
Joined
Oct 2, 2015
Messages
3,144 (0.94/day)
Location
Argentina
System Name Ciel / Akane
Processor AMD Ryzen R5 5600X / Intel Core i3 12100F
Motherboard Asus Tuf Gaming B550 Plus / Biostar H610MHP
Cooling ID-Cooling 224-XT Basic / Stock
Memory 2x 16GB Kingston Fury 3600MHz / 2x 8GB Patriot 3200MHz
Video Card(s) Gainward Ghost RTX 3060 Ti / Dell GTX 1660 SUPER
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB / NVMe WD Blue SN550 512GB
Display(s) AOC Q27G3XMN / Samsung S22F350
Case Cougar MX410 Mesh-G / Generic
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W / Gigabyte P450B
Mouse EVGA X15 / Logitech G203
Keyboard VSG Alnilam / Dell
Software Windows 11
NVLink 2.0 on Turing can do the same 100 GB/s, which helps with GPU-GPU scaling,
but CPU-GPU we're not in any danger of running into bottlenecks on PCI-e 3.0, and if we do, PCI-e 4.0 is already here.
RAM to VRAM copies are still too slow.
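For a sense of scale on why those copies hurt, a small sketch using the link rates quoted above (the 8 GB payload is an arbitrary assumption, and the NVLink figure is the aggregate number from the post):

    # Time to move an 8 GB working set over different links.
    links_gb_s = {
        "PCIe 3.0 x16": 15.75,
        "PCIe 4.0 x16": 31.5,
        "NVLink 2.0 (Turing)": 100,
    }
    payload_gb = 8
    for name, bw in links_gb_s.items():
        print(f"{name}: {payload_gb / bw * 1000:.0f} ms")
    # ~508 ms, ~254 ms and ~80 ms -- dozens of frames' worth of time either way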

OMG, the same CRAP all over again. Nvidia has had the SAME architecture for over 10 years now, buddy boy. I mean, their latest Turing is essentially the same architecture as in their GTX 200 days.

The whole old, tiresome, propagandist fake news that AMD is using the same architecture while somehow Nvidia magically has a new architecture every time is the most absurd propaganda I've ever seen on the internet. The fact of the matter is that Turing literally has its roots in, and deep down is the same architecture as, their GTX 200 architecture from over 10 years ago!

If anything, AMD has changed architectures more times, and even their GCN architecture has seen major changes with each revision. GCN 1 to GCN 4 is virtually incomparable; even Vega was a massive redesign of GCN 4, which was already massively different from GCN 3.

What's with this fake news propaganda that somehow everything AMD does in the GPU space is GCN and always the same, while Nvidia, the saviors of the universe, the angels of heaven, always magically have a new architecture every time, brought down from the clouds on unicorns? Complete fake news propaganda! It's worse than the decades-old fake news of "AMD drivers are bad".
If GCN versions are incomparable, then explain the AMD compiler. Backwards compatibility is intact, same as in Nvidia.
"Massive redesigns" and we still have the same crappy performance and power consumption, 7 years later.
 
Joined
Aug 6, 2017
Messages
7,412 (2.77/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
RAM to VRAM copies are still too slow.


If GCN versions are incomparable, then explain the AMD compiler. Backwards compatibility is intact, same as in Nvidia.
"Massive redesigns" and we still have the same crappy performance and power consumption, 7 years later.
Fact is, all you have to do is look at cases where GCN 1 struggled: Polaris and Vega get hit in the same way. He's partially right in saying that there have been improvements, but at its core GCN is still GCN.
Nvidia changes incrementally too, but mostly to improve where the previous gen struggled. With Turing they got concurrent FP32+INT32 execution, half precision, and finally proper async support. Otherwise, e.g. looking at Pascal and Maxwell, the architecture did not need any major overhauls.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Why would they claim 2070 performance for $499 when you can already get a 2070 for $499? Maybe 2080 performance for $499, but AMD releasing something that actually has 2080-level performance across the board seems like a pipe dream at this point. It also would make zero sense if they are claiming that Radeon VII will still be the top of their stack.
RTX 2070 has ~10.8 billion transistors compared to RX 5700 having ~10.2 billion transistors, PCB costs for Turing are known to be high, and Navi being on 7nm versus Turing on 12nm likely means less wattage so Navi doesn't need as much money spent on the HSF. If AMD can match or slightly exceed RTX 2070 performance at the same price point, AMD is still making more money per unit shipped simply because it's cheaper to produce.

Navi is going to be very competitive in every way against Turing because it doesn't waste transistors on tensor cores and it's a smaller node. The concern I have is what does NVIDIA's 7nm product look like?
 
Joined
Apr 21, 2010
Messages
5,731 (1.07/day)
Location
West Midlands. UK.
System Name Ryzen Reynolds
Processor Ryzen 1600 - 4.0Ghz 1.415v - SMT disabled
Motherboard mATX Asrock AB350m AM4
Cooling Raijintek Leto Pro
Memory Vulcan T-Force 16GB DDR4 3000 16.18.18 @3200Mhz 14.17.17
Video Card(s) Sapphire Nitro+ 4GB RX 580 - 1450/2000 BIOS mod 8-)
Storage Seagate B'cuda 1TB/Sandisk 128GB SSD
Display(s) Acer ED242QR 75hz Freesync
Case Corsair Carbide Series SPEC-01
Audio Device(s) Onboard
Power Supply Corsair VS 550w
Mouse Zalman ZM-M401R
Keyboard Razor Lycosa
Software Windows 10 x64
Benchmark Scores https://www.3dmark.com/spy/6220813
RTX 2070 has ~10.8 billion transistors compared to RX 5700 having ~10.2 billion transistors, PCB costs for Turing are known to be high, and Navi being on 7nm versus Turing on 12nm likely means less wattage so Navi doesn't need as much money spent on the HSF. If AMD can match or slightly exceed RTX 2070 performance at the same price point, AMD is still making more money per unit shipped simply because it's cheaper to produce.

Navi is going to be very competitive in every way against Turing because it doesn't waste transistors on tensor cores and it's a smaller node. The concern I have is what does NVIDIA's 7nm product look like?
Though RVII is 7nm, that didn't set the world alight; depending where you look it fares anywhere from just above a 2070 to, in some cases, competing with a 2080, and everywhere in between. It has been said RVII will remain their flagship (not sure how true or accurate), but if that's the case, what can Navi do at 7nm that RVII failed at? It's still questionably loud and hot and uses a lot of power compared to Turing, so I don't see how Navi will fare any better. I see it being similar to RVII in that sense, in that it might trade blows with the 2070 in some cases but will likely land between that and the 2060 and be priced accordingly, which isn't all that bad.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
RTX 2080 = 13.6 billion transistors
Radeon VII = 13.23 billion transistors

A Radeon VII built on Navi, if the architecture scales well, would have matched or slightly exceeded the RTX 2080 in performance. There have to be some serious architectural improvements in Navi to make it more competitive with Turing.

Yeah, Radeon VII is going to be AMD's top product until Arcturus debuts. Thing has 16 GiB HBM2.

AMD wouldn't have matched RTX 2070 pricing if they weren't certain it's a better performance/value proposition because of AMD's lower mindshare. For that reason, I think Radeon VII and RX 5700 are very similar in performance but RX 5700 has half the memory. AMD also doesn't want RX 5700 to cannibalize Radeon VII sales so RX 5700 likely isn't the best Navi can do--it's the best AMD will allow until Arcturus debuts and Radeon VII inventory is cleared. The fact they're not launching an RX 5800 now is proof they're holding something back. I wouldn't be surprised at all if the silicon that would be RX 5800 is what is going into the PlayStation 5.


Eight more days to E3 announcement which should focus on RX 5000 series.
 
Last edited:
Joined
Feb 8, 2017
Messages
228 (0.08/day)
Though RVII is 7nm, that didn't set the world alight; depending where you look it fares anywhere from just above a 2070 to, in some cases, competing with a 2080, and everywhere in between. It has been said RVII will remain their flagship (not sure how true or accurate), but if that's the case, what can Navi do at 7nm that RVII failed at? It's still questionably loud and hot and uses a lot of power compared to Turing, so I don't see how Navi will fare any better. I see it being similar to RVII in that sense, in that it might trade blows with the 2070 in some cases but will likely land between that and the 2060 and be priced accordingly, which isn't all that bad.
Lisa Su directly addressed this at Computex. Their most expensive GPU is going to remain Radeon VII for now, literally her words. That leaves room for an RTX 2080 competitor that is cheaper. So again, she worded the answer very carefully. Do I think Navi has an RTX 2080 competitor? Yes. Do I think it's going to be released on July 7th? Not sure. I think it's more likely they are going to release the RX 5070, RX 5060, RX 5060 PRO and RX 5050 first.

Later on we are likely to see an RX 5080 and RX 5040 to complete their lineup. Their RX 5070 is also really small, and if it performs that well at that size, it could be that AMD has a bigger die that can compete with the RTX 2080 and RTX 2080 Ti.

According to AMD it's 25% faster than Vega and 50% more power efficient. That would mean at 4096 cores it performs 25% faster and has half the power consumption, which would put it at around 150 W for RTX 2070-type performance.
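Turning relative claims like that into an absolute wattage depends entirely on the baseline you pick; a back-of-the-envelope sketch, assuming Vega 64's ~295 W board power as that baseline (my assumption, AMD stated none):

    # "25% faster, 50% better performance per watt" -> estimated board power.
    baseline_w = 295        # assumed Vega 64 reference board power
    rel_perf = 1.25         # 25% faster
    rel_perf_per_w = 1.50   # 50% better performance per watt
    estimated_w = baseline_w * rel_perf / rel_perf_per_w
    print(estimated_w)      # ~246 W; a ~150 W result needs a lower baseline or larger efficiency gains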
 
Joined
Apr 21, 2010
Messages
5,731 (1.07/day)
Location
West Midlands. UK.
System Name Ryzen Reynolds
Processor Ryzen 1600 - 4.0Ghz 1.415v - SMT disabled
Motherboard mATX Asrock AB350m AM4
Cooling Raijintek Leto Pro
Memory Vulcan T-Force 16GB DDR4 3000 16.18.18 @3200Mhz 14.17.17
Video Card(s) Sapphire Nitro+ 4GB RX 580 - 1450/2000 BIOS mod 8-)
Storage Seagate B'cuda 1TB/Sandisk 128GB SSD
Display(s) Acer ED242QR 75hz Freesync
Case Corsair Carbide Series SPEC-01
Audio Device(s) Onboard
Power Supply Corsair VS 550w
Mouse Zalman ZM-M401R
Keyboard Razor Lycosa
Software Windows 10 x64
Benchmark Scores https://www.3dmark.com/spy/6220813
Lisa Su directly addressed this at Computex. Their most expensive GPU is going to remain Radeon VII for now, literally her words. That leaves room for an RTX 2080 competitor that is cheaper. So again, she worded the answer very carefully. Do I think Navi has an RTX 2080 competitor? Yes. Do I think it's going to be released on July 7th? Not sure. I think it's more likely they are going to release the RX 5070, RX 5060, RX 5060 PRO and RX 5050 first.

Later on we are likely to see an RX 5080 and RX 5040 to complete their lineup. Their RX 5070 is also really small, and if it performs that well at that size, it could be that AMD has a bigger die that can compete with the RTX 2080 and RTX 2080 Ti.

According to AMD it's 25% faster than Vega and 50% more power efficient. That would mean at 4096 cores it performs 25% faster and has half the power consumption, which would put it at around 150 W for RTX 2070-type performance.
You're reading way between the lines here; the most expensive GPU is going to remain the RVII, and yet you think they will release a Navi product that's cheaper and faster? And 25% faster than Vega isn't quite good enough unless you're talking IPC or per-clock.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
According to AMD it's 25% faster than Vega and 50% more power efficient. That would mean at 4096 cores it performs 25% faster and has half the power consumption, which would put it at around 150 W for RTX 2070-type performance.
Those numbers work in terms of transistors. If Radeon VII were cut down to 12.75 billion transistors, it would be about where RX 5700 is.
 
Joined
Mar 23, 2016
Messages
4,844 (1.53/day)
Processor Core i7-13700
Motherboard MSI Z790 Gaming Plus WiFi
Cooling Cooler Master RGB something
Memory Corsair DDR5-6000 small OC to 6200
Video Card(s) XFX Speedster SWFT309 AMD Radeon RX 6700 XT CORE Gaming
Storage 970 EVO NVMe M.2 500GB,,WD850N 2TB
Display(s) Samsung 28” 4K monitor
Case Phantek Eclipse P400S
Audio Device(s) EVGA NU Audio
Power Supply EVGA 850 BQ
Mouse Logitech G502 Hero
Keyboard Logitech G G413 Silver
Software Windows 11 Professional v23H2
Oh, so you remember R600 well? Navi doesn't have a 512-bit memory bus, nor does it have a huge die size, so I don't see what Navi has in common with the 2900 XT. Why don't we wait until it comes out?
Forgot the ring bus on the die.
 