
AMD FSR 2.0 Quality & Performance

Joined
May 3, 2018
Messages
2,881 (1.20/day)
Last year Lisa Su said something along the lines that we could expect FSR hardware acceleration in RDNA3, so we might see large gains in performance compared to RDNA2, although how you'd equalise performance for comparison, other than through clocks, I don't know.
 
Joined
Jan 2, 2009
Messages
9,899 (1.70/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some weird Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
AMD's mistake is that it keeps answering these dirty initiatives from Nvidia: tessellation, and now RT... Do you remember when Nvidia paid a game developer to REMOVE the DX10.1 implementation (Assassin's Creed DX10.1), in which the Radeons were better?
AMD introduced tessellation before Nvidia; I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.
 
Joined
Jun 6, 2016
Messages
5 (0.00/day)
This looks very promising and being GPU platform agnostic/open source is great news. I went for an Nvidia laptop (during the GPU shortage) and DLSS was a big reason for that, but now I might stay with AMD for my next desktop upgrade if they manage to nail RT performance with RDNA3.

I really can't believe some of the massively ill-informed comments in this thread though. A lot of you might want to fact check yourself before you claim that current gen consoles can't do RT or that they only do RT reflections, etc.
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
AMD introduced tessellation before Nvidia; I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.
They used to call it TruForm and it actually dates back to the Radeon 8500 which was released in 2001.
 
Joined
Feb 1, 2019
Messages
3,607 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
It is good that this is finally getting up to the DLSS standard while not requiring any more die space. Imagine the costs Nvidia could save by not having to include tensor cores (or including far fewer) while still offering the same stuff. Or they could just find more uses for them than DLSS and some pro stuff, on a separate die for that MCM future :D
They'd save lots of money, but maybe also sell fewer GPUs.

I always felt these special cores were there to try and lock people into Nvidia as a vendor. Although I have used Nvidia for many years and currently have a 3080, I have never been a fan of RT. DLSS is OK, but its weakness is that it requires game devs to support it alongside the special cores.

DLSS/RT hardware cost may be the reason why AMD can add more VRAM and compete at similar price points.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
Some fine details render better on NV, some better on AMD; he brought up points like the wires for the balloons being aliased in motion on Nvidia but not on AMD, while a metal fence on top of a building was better on Nvidia.

The edge for fine detail goes, as you say, to Nvidia, but for quality in motion I didn't hear or see a clear winner.

I need to bring out the Kepler 780 Ti and the 7970 and see how they run, or if they run at all :D
FSR 1.0 worked on the 7970 just fine, which I think is really the game-changer for these technologies... they just run and work.
I'm quoting from the video. They state that in motion DLSS was better: the image was more stable and had more fine detail. In the still image DLSS also had more fine detail. This makes 100% sense given the extra processing power available to DLSS via the tensor cores. FSR 2 looks completely usable and not too bad, really. FSR 1, the original "DLSS killer", was complete garbage for image quality.

His System
Ryzen 7 5800X3D
XFX RX 6800 Speedster

AMD introduced tessellation before Nvidia; I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.
ATI Radeon 8500 in 2001. Then Nvidia had better performance, so there were special tessellation benchmarks. There were complaints because AMD cards were hit harder performance-wise.

However, with all of its geometric detail, the DX11 upgraded version of Crysis 2 now manages to push that envelope. The guys at Hardware.fr found that enabling tessellation dropped the frame rates on recent Radeons by 31-38%. The competing GeForces only suffered slowdowns of 17-21%.

Radeon owners do have some recourse, thanks to the slider in newer Catalyst drivers that allows the user to cap the tessellation factor used by games. Damien advises users to choose a limit of 16 or 32, well below the peak of 64.

As a publication that reviews GPUs, we have some recourse, as well. One of our options is to cap the tessellation factor on Radeon cards in future testing.

Yet another is to exclude Crysis 2 results from our overall calculation of performance for our value scatter plots, as we’ve done with HAWX 2 in the past.
- source

So the "too much tessellation" narrative began, and it was corrected by messing with driver settings and cherry-picking benchmarks, all so Radeon cards could perform better. Now, like then, ray tracing is too heavy on Radeon cards and this needs to be another "special treatment".
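
For context on the Catalyst slider mentioned in the quoted article above: DX11 patch tessellation factors go up to 64, and the driver cap simply clamps whatever factor the game requests to the user-selected ceiling. A minimal sketch of that idea (hypothetical names, not actual driver or game code):

```cpp
#include <algorithm>

// Illustrative only: DX11 allows per-patch tessellation factors up to 64,
// and the Catalyst slider described in the quote let users clamp whatever
// factor the game requested to a lower ceiling (e.g. 16 or 32).
constexpr float kMaxDx11TessFactor = 64.0f;

float ApplyTessellationCap(float requestedFactor, float userCap /* e.g. 16 or 32 */)
{
    // Never exceed the API maximum, and never exceed the user's cap.
    return std::min({requestedFactor, userCap, kMaxDx11TessFactor});
}

// Example: a game asking for factor 64 on a flat surface would be tessellated
// at 16 if the user set the slider to 16, cutting the geometry load sharply.
```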

Thus began the "people want a 'better' implementation of DX11, and thus of tessellation" era.

Then there was "you can't use PhysX results because Nvidia is so far ahead". Thus came the same attacks against it: it's not open source, they don't work on the CPU code.

Physx is a joke, and a detriment to the community, so I don’t get why you are bothering to defend it. Whether or not Physx had potential doesn’t matter as long as it’s being manipulated to sell video cards. You should support open alternatives instead.
That would be non-ATI/AMD video cards being sold. "Open alternatives instead"... sounds like DLSS vs FSR.

The same happened with DLSS, but Nvidia won that one for now. The same with ray tracing. Thus innovations get destroyed, just so a Radeon card can perform better, and then get accepted once Radeon performance reaches parity with Nvidia.

source

  • l33t-g4m3r
  • 11 years ago
Trolls trolling trolls. If you don’t like my post editing, don’t argue with me. I’m just clarifying my views and fixing grammar. Neither of you have anything good to say anyway, since your points depend on shaky evidence, or don’t matter since Ageia is defunct. Once you look at the whole picture, the arguments fall through. All I gotta do is point out the hole in the boat, and voila, it sinks.
Physx is a joke, and a detriment to the community, so I don’t get why you are bothering to defend it. The cat’s been out of the bag for a while too, so arguing about it now is like beating a dead horse.
Whether or not Physx had potential doesn’t matter as long as it’s being manipulated to sell video cards. You should support open alternatives instead.
 
Last edited:
Joined
May 31, 2016
Messages
4,437 (1.43/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
HUB stated that DLSS wins over FSR 2.0 in some cases with better picture quality, loses in others, or the two are equal. The clear advantage for DLSS in comparison to FSR 2.0 is that DLSS is a notch faster, by up to 6%. FSR 2.0, on the other hand, supports every graphics card there is, and that is a huge advantage.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
HUB stated that DLSS wins over FSR 2.0 in some cases with better picture quality, loses in others, or the two are equal. The clear advantage for DLSS in comparison to FSR 2.0 is that DLSS is a notch faster, by up to 6%. FSR 2.0, on the other hand, supports every graphics card there is, and that is a huge advantage.
"Open, and supports all GPUs" is an old AMD PR trick. It was used to attack PhysX, which gave a massive performance uplift, just not for AMD. AMD uses it to kill off innovations that hurt its performance.
 
Joined
May 12, 2022
Messages
54 (0.06/day)
I'm quoting from the video. They state that in motion DLSS was better: the image was more stable and had more fine detail. In the still image DLSS also had more fine detail. This makes 100% sense given the extra processing power available to DLSS via the tensor cores. FSR 2 looks completely usable and not too bad, really. FSR 1, the original "DLSS killer", was complete garbage for image quality.

His System
Ryzen 7 5800X3D
XFX RX 6800 Speedster


ATI Radeon 8500 in 2001. Then Nvidia had better performance, so there were special tessellation benchmarks. There were complaints because AMD cards were hit harder performance-wise.





So the "too much tessellation" narrative began, and it was corrected by messing with driver settings and cherry-picking benchmarks, all so Radeon cards could perform better. Now, like then, ray tracing is too heavy on Radeon cards and this needs to be another "special treatment".

Thus began the "people want a 'better' implementation of DX11, and thus of tessellation" era.

Then there was "you can't use PhysX results because Nvidia is so far ahead". Thus came the same attacks against it: it's not open source, they don't work on the CPU code.


That would be non-ATI/AMD video cards being sold. "Open alternatives instead"... sounds like DLSS vs FSR.

The same happened with DLSS, but Nvidia won that one for now. The same with ray tracing. Thus innovations get destroyed, just so a Radeon card can perform better, and then get accepted once Radeon performance reaches parity with Nvidia.

source
Some HUGE gaps in time there and some weird remembering of how things played out.

Tessellation was first intro'd in hardware in 2001 by ATi with the 8500, as TruForm, but tessellation didn't see wide use until years and many DX versions later, when it became programmable rather than fixed-function. Then, when nVidia got a performance advantage, they used GameWorks to get developers to implement nVidia's own coded effects that used tessellation. The problem, however, is that those effects used tessellation in extremely over-the-top ways. But because it was black-box code from nVidia, developers couldn't optimize or adjust its performance. The only thing ATi/AMD could do about it in the short term was control the tessellation levels driver-side.

PhysX was BS. nVidia bought it and locked its hardware acceleration to CUDA. They also blocked using a dedicated GeForce card for it if another vendor's card was detected in the system. Furthermore, they purposefully didn't optimise the CPU acceleration, making the effects only perform well with CUDA acceleration. And later they all but abandoned the hardware-accelerated functionality in favour of wider adoption as a general physics engine that ran CPU-side anyway.

The rest on innovation getting destroyed is kinda just hyperbole.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
Some HUGE gaps in time there and some weird remembering of how things played out.

Tessellation was first intro'd in hardware in 2001 by ATi with the 8500, as TruForm, but tessellation didn't see wide use until years and many DX versions later, when it became programmable rather than fixed-function. Then, when nVidia got a performance advantage, they used GameWorks to get developers to implement nVidia's own coded effects that used tessellation. The problem, however, is that those effects used tessellation in extremely over-the-top ways. But because it was black-box code from nVidia, developers couldn't optimize or adjust its performance. The only thing ATi/AMD could do about it in the short term was control the tessellation levels driver-side.

PhysX was BS. nVidia bought it and locked its hardware acceleration to CUDA. They also blocked using a dedicated GeForce card for it if another vendor's card was detected in the system. Furthermore, they purposefully didn't optimise the CPU acceleration, making the effects only perform well with CUDA acceleration. And later they all but abandoned the hardware-accelerated functionality in favour of wider adoption as a general physics engine that ran CPU-side anyway.

The rest on innovation getting destroyed is kinda just hyperbole.
It was called over-tessellation because AMD cards could not handle the feature well. So there was a massive misinformation campaign about how the feature was overused in games, and thus it was "justified" to reduce tessellation settings. The reality was that this was an issue with AMD's performance, and Nvidia had no problems. AMD brought out hacks in their drivers to restore performance. Yes, restore performance. lol

Remember when DXR was nothing but a fad and you should just get a 10-series card or stay on a 10-series card? Get a 1660. Don't buy a 20-series card. You can't see the difference between RT and raster anyway. DLSS is all blurry and useless. Did you play Control and Metro Exodus in raster mode because of that lie? Just because you never got it doesn't mean it was untrue. Did you get that it was a con as well?

PhysX was amazing, if you played the games that supported it. You could use your old Nvidia card for it. I remember it in Fallout 4, Mirror's Edge, Star Citizen and the Batman games, the Metro series games and others. It was you who got conned into thinking it was crap, and into all the arguments against it. You're still so invested in that con that you still can't admit it to yourself. Also, PhysX used to affect the physics score for the 3DMark (2011?) benchmark. This meant that Nvidia cards all had the best overall scores, which caused a hatred from AMD owners like you would not believe. This started the attack on PhysX to protect AMD and their lack of innovation. So why did it disappear? Direct Physics is officially a part of DirectX 12. It was to use Havok Physics, but that disappeared from the pages of history afterwards. Nvidia GameWorks: list of games, not complete. Note The Witcher 3. Note the use of the term PhysX.

"We have invested over 500 engineering-years of work to deliver the most comprehensive platform for developing DirectX 12 games, including the world's most advanced physics simulation engine," said Tony Tamasi, senior vice president of content and technology at NVIDIA. "These resources will ensure that GeForce gamers can enjoy the very best game experience on DirectX 12 titles, just as they have on DirectX 11 games."

Remember "DLSS won't catch on, it's closed and only supports Nvidia cards"? FSR 1 was supposedly better than DLSS (or at least some versions of it) and open source; let's not forget the videos now showing FSR 1 behind FSR 2, and FSR 2 not quite as good as DLSS.

Remember when the AMD 6000 series was an Nvidia killer, yet the Nvidia 30 series basically now controls the market? There are more cards with tensor cores than ever. DLSS support won't be a problem for most gamers; after all, they bought either 20- or 30-series cards. Not bad for an RTX fad that will pass; it's now part of DX12 Ultimate (so much for being a fad that will pass). How does FSR 2 help, then, by being open to any GPU? After all, if you really need FSR 2 for an old GPU, then maybe upgrade. If the market is anything to go by, they will upgrade to an Nvidia GPU with tensor cores and thus get DLSS support.

This hyperbole goes on forever.

Stop believing this nonsense and then trying to convince others. I am tired of it.
 
Last edited:
Joined
Aug 21, 2015
Messages
1,725 (0.51/day)
Location
North Dakota
System Name Office
Processor Ryzen 5600G
Motherboard ASUS B450M-A II
Cooling be quiet! Shadow Rock LP
Memory 16GB Patriot Viper Steel DDR4-3200
Video Card(s) Gigabyte RX 5600 XT
Storage PNY CS1030 250GB, Crucial MX500 2TB
Display(s) Dell S2719DGF
Case Fractal Define 7 Compact
Power Supply EVGA 550 G3
Mouse Logitech M705 Marthon
Keyboard Logitech G410
Software Windows 10 Pro 22H2
It was called over-tessellation because AMD cards could not handle the feature well. So there was a massive misinformation campaign about how the feature was overused in games, and thus it was "justified" to reduce tessellation settings. The reality was that this was an issue with AMD's performance, and Nvidia had no problems. AMD brought out hacks in their drivers to restore performance. Yes, restore performance. lol

Remember when DXR was nothing but a fad and you should just get a 10-series card or stay on a 10-series card? Get a 1660. Don't buy a 20-series card. You can't see the difference between RT and raster anyway. DLSS is all blurry and useless. Did you play Control and Metro Exodus in raster mode because of that lie? Just because you never got it doesn't mean it was untrue. Did you get that it was a con as well?

PhysX was amazing, if you played the games that supported it. You could use your old Nvidia card for it. I remember it in Fallout 4, Mirror's Edge, Star Citizen and the Batman games, the Metro series games and others. It was you who got conned into thinking it was crap, and into all the arguments against it. You're still so invested in that con that you still can't admit it to yourself. Also, PhysX used to affect the physics score for the 3DMark (2011?) benchmark. This meant that Nvidia cards all had the best overall scores, which caused a hatred from AMD owners like you would not believe. This started the attack on PhysX to protect AMD and their lack of innovation. So why did it disappear? Direct Physics is officially a part of DirectX 12. Nvidia GameWorks: list of games, not complete. Note The Witcher 3. Note the use of the term PhysX.



Remember "DLSS won't catch on, it's closed and only supports Nvidia cards"? FSR 1 was supposedly better than DLSS (or at least some versions of it) and open source; let's not forget the videos now showing FSR 1 behind FSR 2, and FSR 2 not quite as good as DLSS.

Remember when the AMD 6000 series was an Nvidia killer, yet the Nvidia 30 series basically now controls the market? There are more cards with tensor cores than ever. DLSS support won't be a problem for most gamers; after all, they bought either 20- or 30-series cards. Not bad for an RTX fad that will pass; it's now part of DX12 Ultimate (so much for being a fad that will pass). How does FSR 2 help, then, by being open to any GPU? After all, if you really need FSR 2 for an old GPU, then maybe upgrade. If the market is anything to go by, they will upgrade to an Nvidia GPU with tensor cores and thus get DLSS support.

This hyperbole goes on forever.

Stop believing this nonsense and then trying to convince others. I am tired of it.

Nvidia has basically controlled the market for at least ten years. 6000 series had no chance to be a "killer" of anything; the mindshare, production and distribution for AMD simply weren't there. What it did manage, though, was superior efficiency and price/performance in certain cases. Except in ray tracing, of course, which seems to be the only thing you care about. Correct me if I'm wrong, but do ray-traced engines not still run on a raster core? In any event, raster performance is still very relevant. And what the AMD partisans said about FSR 1.0 and tessellation doesn't particularly matter anymore.

Let's focus on the subject at hand, then: based on the information available RIGHT NOW, FSR 2 has the potential to give DLSS a run for its money. That's it. RT doesn't enter into it. FSR 1 doesn't enter into it. Tessellation doesn't enter into it.

Seriously, it's like AMD ran over your dog and then Nvidia gave you a puppy or something. Chill out.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
Nvidia has basically controlled the market for at least ten years. 6000 series had no chance to be a "killer" of anything; the mindshare, production and distribution for AMD simply weren't there. What it did manage, though, was superior efficiency and price/performance in certain cases. Except in ray tracing, of course, which seems to be the only thing you care about. Correct me if I'm wrong, but do ray-traced engines not still run on a raster core? In any event, raster performance is still very relevant. And what the AMD partisans said about FSR 1.0 and tessellation doesn't particularly matter anymore.

Let's focus on the subject at hand, then: based on the information available RIGHT NOW, FSR 2 has the potential to give DLSS a run for its money. That's it. RT doesn't enter into it. FSR 1 doesn't enter into it. Tessellation doesn't enter into it.

Seriously, it's like AMD ran over your dog and then Nvidia gave you a puppy or something. Chill out.
RT is the only thing the whole market cares about, or did you miss the fact it's center stage for the consoles and for DX12? All the 3D engines are updated or are updating to use DXR, and Unreal Engine 5 brings ray tracing to nearly all platforms. It's you who is conning yourself. It's basically in doubt whether FSR 2 even lasts. No one really bought an AMD 6000 series card, and this is not an opinion. It only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market. Unreal Engine 5 supports TSR, which leaves FSR 2 looking for a place to live. Sure, AMD will pay for a few developers to use FSR 2, like with FSR 1, and it's useful for AMD cards in DXR games, so some big AAA titles may support it, but that's really it as far as I can see. There is a small part of the market that will use FSR 2 and a much bigger part (almost all the market) that will use DLSS.

As far as I can see, FSR 2 is slower than DLSS. It has less fine detail and is less stable, and this is even more so in motion. PCWorld stated:

If you want to game at 1440p with FSR 2.0, you'll probably need at least a Radeon RX 5600 or Vega GPU, or a GTX 1080 or RTX 2060 on Nvidia's side—though that's not a universal truth.

So people on low-end hardware are not really going to use FSR 2 to its fullest.

Also, as for AMD caring how well FSR 2 runs on other hardware: they tuned it only for RDNA2; AMD FSR 2.0 upscaling is tuned to run faster on RDNA 2-powered graphics cards.
 
Last edited:
Joined
Jul 8, 2021
Messages
14 (0.01/day)
Digital Foundry released a very competent video about this, which makes this AMD PR article completely useless. FSR has the usual faults of this technique, from movement, to particles, to hair, to transparency and detail stability. It has a much higher cost on AMD cards, almost double what it is on Ampere. DLSS is universally better than FSR 2 and every RTX owner should choose DLSS. But it remains a success for people with older hardware.
 
Joined
Apr 12, 2013
Messages
7,536 (1.77/day)
RT is the only thing the whole market cares about, or did you miss the fact it's center stage for the consoles and for DX12?
(reaction GIF: "No")
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
Digital Foundry released a very competent video about this, which makes this AMD PR article completely useless. FSR has the usual faults of this technique, from movement, to particles, to hair, to transparency and detail stability. It has a much higher cost on AMD cards, almost double what it is on Ampere. DLSS is universally better than FSR 2 and every RTX owner should choose DLSS. But it remains a success for people with older hardware.
They stated it's not a DLSS killer: DLSS has better performance and quality, which is to be expected with the extra processing power of the tensor cores, but this is a win for people with AMD hardware. FSR 2.0 is tuned for RDNA2 hardware, so it should be even slower on current Nvidia hardware and older hardware in general. If you are on an RTX 2060-2070, you are going to use DLSS. The fact some reviews state a GTX 1080 for 1440p would imply older hardware is not really the goal; this is meant to run on RDNA2 GPUs and have far worse performance on others by design, and the cache on RDNA2 is what makes 4K possible with good performance and speeds up FSR 2 in general.


AMD says FSR 2.0 will still work on even older GPUs, but your mileage may vary. Due to FSR 2.0's additional computing requirements on the GPU, you might not get a performance uplift at all when using FSR 2.0 on older GPUs, especially if you try running at higher resolutions. As we showed with our GTX 970, GTX 1080, RX 480, and RX Vega 64 testing, you should expect lower performance gains if you use FSR 2.0 on older hardware.
The second caveat is GPU support. According to AMD, there are limits to what FSR 2.0 can do on older hardware. AMD recommends an RX 5700 or RTX 2070 as a baseline for 4K upscaling with FSR 2.0. For 1440p, AMD recommends at least an RX 5600 XT or GTX 1080 for optimal performance, and at 1080p AMD recommends an RX 590 or GTX 1070.
source
So I guess older hardware is not really the focus, and it's just RDNA2 GPUs that get the full benefit. On Nvidia, for 4K upscaling, you would only use DLSS, as FSR 2 is worse for you. A card for 1440p is not really low-end hardware; 1080p is outside the GTX 1060's abilities if AMD are correct, and a GTX 1070 is not really low-end hardware either.

Seems the only low-end hardware could be RDNA2 APUs...

FSR 2.0 is basically as complex as DLSS 2.x now, and both need frame data, motion vectors, and depth buffers. Maybe it, too, will lose out to FSR 1 and its easy support / lower development time.
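
To make the "frame data, motion vectors, and depth buffers" point concrete, here is a rough sketch of the per-frame inputs a temporal upscaler of this class consumes. The struct and function names below are hypothetical, not the actual FidelityFX FSR 2.0 or DLSS API; it only shows the shape of the hand-off an engine has to do each frame:

```cpp
#include <cstdint>

// Purely illustrative: hypothetical types standing in for GPU resources.
// The real FSR 2.0 / DLSS SDKs use their own context and resource types.
struct GpuTexture { std::uint64_t handle = 0; };

// Per-frame inputs a temporal upscaler of this class typically consumes
// (field names are assumptions for illustration, not the real API).
struct UpscaleInputs {
    GpuTexture lowResColor;      // frame rendered at the internal resolution
    GpuTexture depth;            // scene depth, used for disocclusion handling
    GpuTexture motionVectors;    // per-pixel motion, used to reproject history
    float      jitterX = 0.f;    // sub-pixel camera jitter applied this frame
    float      jitterY = 0.f;
    float      frameTimeMs = 0.f;                       // frame delta, for history weighting
    std::uint32_t renderWidth = 0, renderHeight = 0;    // e.g. 1280 x 720
    std::uint32_t outputWidth = 0, outputHeight = 0;    // e.g. 2560 x 1440
};

// In a real integration this is where the vendor library's dispatch call goes;
// the accumulation/reprojection itself lives inside that library.
void DispatchTemporalUpscale(const UpscaleInputs& /*in*/, GpuTexture /*output*/) {}

int main() {
    UpscaleInputs in;
    in.renderWidth = 1280;  in.renderHeight = 720;    // internal render resolution
    in.outputWidth = 2560;  in.outputHeight = 1440;   // target display resolution
    DispatchTemporalUpscale(in, GpuTexture{});        // called once per frame
}
```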
 
Last edited:
Joined
May 12, 2022
Messages
54 (0.06/day)
It was called over-tessellation because AMD cards could not handle the feature well. So there was a massive misinformation campaign about how the feature was overused in games, and thus it was "justified" to reduce tessellation settings. The reality was that this was an issue with AMD's performance, and Nvidia had no problems. AMD brought out hacks in their drivers to restore performance. Yes, restore performance. lol

Remember when DXR was nothing but a fad and you should just get a 10-series card or stay on a 10-series card? Get a 1660. Don't buy a 20-series card. You can't see the difference between RT and raster anyway. DLSS is all blurry and useless. Did you play Control and Metro Exodus in raster mode because of that lie? Just because you never got it doesn't mean it was untrue. Did you get that it was a con as well?

PhysX was amazing, if you played the games that supported it. You could use your old Nvidia card for it. I remember it in Fallout 4, Mirror's Edge, Star Citizen and the Batman games, the Metro series games and others. It was you who got conned into thinking it was crap, and into all the arguments against it. You're still so invested in that con that you still can't admit it to yourself. Also, PhysX used to affect the physics score for the 3DMark (2011?) benchmark. This meant that Nvidia cards all had the best overall scores, which caused a hatred from AMD owners like you would not believe. This started the attack on PhysX to protect AMD and their lack of innovation. So why did it disappear? Direct Physics is officially a part of DirectX 12. It was to use Havok Physics, but that disappeared from the pages of history afterwards. Nvidia GameWorks: list of games, not complete. Note The Witcher 3. Note the use of the term PhysX.



Remember "DLSS won't catch on, it's closed and only supports Nvidia cards"? FSR 1 was supposedly better than DLSS (or at least some versions of it) and open source; let's not forget the videos now showing FSR 1 behind FSR 2, and FSR 2 not quite as good as DLSS.

Remember when the AMD 6000 series was an Nvidia killer, yet the Nvidia 30 series basically now controls the market? There are more cards with tensor cores than ever. DLSS support won't be a problem for most gamers; after all, they bought either 20- or 30-series cards. Not bad for an RTX fad that will pass; it's now part of DX12 Ultimate (so much for being a fad that will pass). How does FSR 2 help, then, by being open to any GPU? After all, if you really need FSR 2 for an old GPU, then maybe upgrade. If the market is anything to go by, they will upgrade to an Nvidia GPU with tensor cores and thus get DLSS support.

This hyperbole goes on forever.

Stop believing this nonsense and then trying to convince others. I am tired of it.

...Where to start... I'll just do it by paragraph, and just quote the start of each to make it easier to follow. I'll leave it fully quoted above :)

"It was called over-tessellation..." No, The performance on nVidia hardware sucked as well. But was mostly playable. That was intentional, because nVidia had the performance advantage. The blackbox code was un-modifiable by the developers. So even if they wanted to adjust it to improve performance, they couldn't. You can google this, it's a known part of graphics history. Wasn't always the case though, nVidia honestly just nailed it.

"Remember when DXR was nothing but a fad..." Yeah, because a 20xx series performance wasn't really much better than the 10xx series at first and RT was barely in use yet. DLSS 1.x was a blurry ugly mess, DLSS 2.x addressed the issue. And RT was kinda poorly implemented in allot of early games. So buying hardware to use it wasn't really worth it yet. It took awhile before dev's really started to get a hang of where to use it and where not to. It was/is totally worth it in some games.

Also, fun little factoid: Control's DLSS implementation didn't use the tensor cores. Often referred to as DLSS "1.9", Control later updated to DLSS 2.x and was commonly used to compare DLSS 1 and 2.

"PhysX was amazing.." Yes it was pretty damn cool. And yes you could dedicate your old card. But only if you had a nVidia GPU's in your system. And 3DMark dropped it in 2008, the year nVidia bought Ageia and locked down PhysX. You can google this history, but it basically boiled down to nVidia hardware being able to potentially cheat the PhysX powered benchmark. So 3DMark just removed it.

The rest is just regular GPU wars stuff. People should buy whatever hardware matches what they want out of it. *shrug* same as always.

*late edit*
Also, Havok's GPU acceleration died when Intel bought Havok. It was, btw, hardware agnostic and was demoed on both GeForce and Radeon hardware the year before they were bought by Intel.
 
Joined
Aug 21, 2015
Messages
1,725 (0.51/day)
Location
North Dakota
System Name Office
Processor Ryzen 5600G
Motherboard ASUS B450M-A II
Cooling be quiet! Shadow Rock LP
Memory 16GB Patriot Viper Steel DDR4-3200
Video Card(s) Gigabyte RX 5600 XT
Storage PNY CS1030 250GB, Crucial MX500 2TB
Display(s) Dell S2719DGF
Case Fractal Define 7 Compact
Power Supply EVGA 550 G3
Mouse Logitech M705 Marthon
Keyboard Logitech G410
Software Windows 10 Pro 22H2
RT is the only thing the whole market cares about, or did you miss the fact it's center stage for the consoles and for DX12? All the 3D engines are updated or are updating to use DXR, and Unreal Engine 5 brings ray tracing to nearly all platforms. It's you who is conning yourself. It's basically in doubt whether FSR 2 even lasts. No one really bought an AMD 6000 series card, and this is not an opinion. It only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market. Unreal Engine 5 supports TSR, which leaves FSR 2 looking for a place to live. Sure, AMD will pay for a few developers to use FSR 2, like with FSR 1, and it's useful for AMD cards in DXR games, so some big AAA titles may support it, but that's really it as far as I can see. There is a small part of the market that will use FSR 2 and a much bigger part (almost all the market) that will use DLSS.

As far as I can see, FSR 2 is slower than DLSS. It has less fine detail and is less stable, and this is even more so in motion. PCWorld stated:



So people on low-end hardware are not really going to use FSR 2 to its fullest.

Also, as for AMD caring how well FSR 2 runs on other hardware: they tuned it only for RDNA2; AMD FSR 2.0 upscaling is tuned to run faster on RDNA 2-powered graphics cards.

Conning myself about what, exactly?
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
...Where to start... I'll just do it by paragraph, and just quote the start of each to make it easier to follow. I'll leave it fully quoted above :)

"It was called over-tessellation..." No, The performance on nVidia hardware sucked as well. But was mostly playable. That was intentional, because nVidia had the performance advantage. The blackbox code was un-modifiable by the developers. So even if they wanted to adjust it to improve performance, they couldn't. You can google this, it's a known part of graphics history. Wasn't always the case though, nVidia honestly just nailed it.

"Remember when DXR was nothing but a fad..." Yeah, because a 20xx series performance wasn't really much better than the 10xx series at first and RT was barely in use yet. DLSS 1.x was a blurry ugly mess, DLSS 2.x addressed the issue. And RT was kinda poorly implemented in allot of early games. So buying hardware to use it wasn't really worth it yet. It took awhile before dev's really started to get a hang of where to use it and where not to. It was/is totally worth it in some games.

Also, fun little factoid: Control's DLSS implementation didn't use the tensor cores. Often referred to as DLSS "1.9", Control later updated to DLSS 2.x and was commonly used to compare DLSS 1 and 2.

"PhysX was amazing.." Yes it was pretty damn cool. And yes you could dedicate your old card. But only if you had a nVidia GPU's in your system. And 3DMark dropped it in 2008, the year nVidia bought Ageia and locked down PhysX. You can google this history, but it basically boiled down to nVidia hardware being able to potentially cheat the PhysX powered benchmark. So 3DMark just removed it.

The rest is just regular GPU wars stuff. People should buy whatever hardware matches what they want out of it. *shrug* same as always.
Control used lots of DLSS versions; only version 1.9 was not tensor-based, and only in Control. It also looked like complete crap. I was playing Control at the time and was not happy with DLSS 1.9.

As a person who really did play with DLSS 1.x: it was bad at the start, then you would get an update and it was magic. FSR 1 was complete garbage and could not match DLSS 1.

Evidently it was, as Metro Exodus received an update that was said to improve DLSS, and improve it did. Below we're including several screenshots taken in game with performance overlays included so that you can see scene for scene the performance change when running the game at 4K with DLSS on vs off (Ultra settings, Hairworks and PhysX enabled). One thing to take away here is that first impressions are hard to shake, but sometimes deserve a second look once ironed out. Let us know down in the comment section if this changes your mind on what is possible with DLSS, because clearly it can improve, and with the click of a button you can get comparable image quality with healthy performance gains. The hotfix updates the Steam game version to 1.0.0.1 while the Epic store version will be updated to version 1.0.1.1. source 1 source 2

UPDATE 21 FEBRUARY 2019

Build numbers:
  • Epic build (verify in-game from Main Menu Options or Pause menu) – 1.0.1.1
  • DLSS fixes and improvements to sharpness
You can tell apart the people who learned about DLSS 1 from propaganda and the people who actually played games with DLSS 1. DLSS 1.0 was released in February 2019 with Metro Exodus and Battlefield V, so this massive image quality increase happened within a month of DLSS 1's release.

PhysX cards could do the same; it was not just Nvidia cards. This is not cheating, as there have been accelerator cards from the dawn of computing history, and they too are not cheating. After all, GPUs are one type of accelerator card. So by your argument we have all been cheating in Time Spy by installing a high-end GPU and not running it all on the CPU. The only problem was AMD owners. That said, I had two 7970s and two 290X GPUs. This is the stuff that turned me off AMD: all the lies, and reviews that can't be trusted.

"Too much tessellation", and not AMD's performance (source). We all know it was AMD cards; I had two 7970s and two 290Xs. The reason for it being so bad is in bold. That's why they were going to make changes in the drivers and ban games from benchmarks.
Unnecessary geometric detail slows down all GPUs, of course, but it just so happens to have a much larger effect on DX11-capable AMD Radeons than it does on DX11-capable Nvidia GeForces. The Fermi architecture underlying all DX11-class GeForce GPUs dedicates more attention (and transistors) to achieving high geometry processing throughput than the competing Radeon GPU architectures. We’ve seen the effect quite clearly in synthetic tessellation benchmarks. Few games have shown a similar effect, simply because they don’t push enough polygons to strain the Radeons’ geometry processing rates. However, with all of its geometric detail, the DX11 upgraded version of Crysis 2 now manages to push that envelope. The guys at Hardware.fr found that enabling tessellation dropped the frame rates on recent Radeons by 31-38%. The competing GeForces only suffered slowdowns of 17-21%.

Radeon owners do have some recourse, thanks to the slider in newer Catalyst drivers that allows the user to cap the tessellation factor used by games. Damien advises users to choose a limit of 16 or 32, well below the peak of 64.

As a publication that reviews GPUs, we have some recourse, as well. One of our options is to cap the tessellation factor on Radeon cards in future testing. Another is simply to skip Crysis 2 and focus on testing other games. Yet another is to exclude Crysis 2 results from our overall calculation of performance for our value scatter plots, as we’ve done with HAWX 2 in the past.
 
Last edited:
Joined
May 12, 2022
Messages
54 (0.06/day)
Control used lots of DLSS versions; only version 1.9 was not tensor-based, and only in Control. It also looked like complete crap. I was playing Control at the time and was not happy with DLSS 1.9.

As a person who really did play with DLSS 1.x: it was bad at the start, then you would get an update and it was magic. FSR 1 was complete garbage and could not match DLSS 1.




You can tell apart the people who learned about DLSS 1 from propaganda and the people who actually played games with DLSS 1. DLSS 1.0 was released in February 2019 with Metro Exodus and Battlefield V, so this massive image quality increase happened within a month of DLSS 1's release.

PhysX cards could do the same; it was not just Nvidia cards. This is not cheating, as there have been accelerator cards from the dawn of computing history, and they too are not cheating. After all, GPUs are one type of accelerator card. So by your argument we have all been cheating in Time Spy by installing a high-end GPU and not running it all on the CPU. The only problem was AMD owners. That said, I had two 7970s and two 290X GPUs. This is the stuff that turned me off AMD: all the lies, and reviews that can't be trusted.

DLSS 1 sucked. It's well documented and anyone can easily google what happened. Updates helped increase its fidelity, but it still sucked. It was funny at the time that in some scenarios a simple bilinear upsample with CAS sharpening looked better than DLSS 1, but that was mostly down to DLSS 1 just not really panning out. Metro got some special attention, so it's easily the best-looking version of DLSS 1 we saw. But DLSS 2.x is when DLSS really came into its own and has been what we always hoped it would be.

"PhysX cards could do the same, its was not just nvidia cards" Google what happen in 2008 with the 3DMark Vantage benchmark. Not sure your remembering what happened. It made sense to remove it.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
DLSS 1 sucked. It's well documented and anyone can easily google what happened. Updates helped increase its fidelity, but it still sucked. It was funny at the time that in some scenarios a simple bilinear upsample with CAS sharpening looked better than DLSS 1, but that was mostly down to DLSS 1 just not really panning out. Metro got some special attention, so it's easily the best-looking version of DLSS 1 we saw. But DLSS 2.x is when DLSS really came into its own and has been what we always hoped it would be.

"PhysX cards could do the same, its was not just nvidia cards" Google what happen in 2008 with the 3DMark Vantage benchmark. Not sure your remembering what happened. It made sense to remove it.
Only in your head is that true; objectively it is not. I already posted evidence that DLSS 1 was not crap. Just because support for PhysX hardware acceleration was removed does not mean it was right. AMD scores were laughable at the time because of hardware acceleration. PhysX had hardware support from the start: anyone could buy a PhysX card, and Nvidia cards got PhysX support. That was the truth of it, and AMD scores suffered. Action was taken to protect AMD performance numbers from widespread PhysX hardware support. A pattern that repeats right up to DXR and DLSS.

Cheating is what Nvidia did in the GeForce FX series of cards. It's in the video where they dropped colour depth to 16-bit to increase performance and then told no one about it.
 
Last edited:
Joined
May 12, 2022
Messages
54 (0.06/day)
I followed my own advice and googled it, because reading your posts made me think I should look back. I was being too harsh (on DLSS 1).

The PhysX bit, though... it had to be removed. You can't allow a vendor-locked, closed-source feature that is top-to-bottom controlled by a competing vendor to be used as a comparison benchmark. nVidia modified the PhysX API to do something the benchmark wasn't supposed to test for: the benchmark was designed for CPU and PPU, and nVidia then changed the API to use CUDA acceleration on the GPU. That skewed the results and is exactly the sort of reason you can't allow it; it's an obvious conflict of interest.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
I followed my own advice and googled it, because reading your posts made me think I should look back. I was being too harsh; it wasn't great though.

The PhysX bit, though... it had to be removed. You can't allow a vendor-locked, closed-source feature that is top-to-bottom controlled by a competing vendor to be used as a comparison benchmark. nVidia modified the PhysX API to do something the benchmark wasn't supposed to test for: the benchmark was designed for CPU and PPU, and nVidia then changed the API to use CUDA acceleration on the GPU. That skewed the results and is exactly the sort of reason you can't allow it; it's an obvious conflict of interest.
The benchmark supported HW acceleration, and PhysX cards were supported in CPU test 2 only.

Since only the second CPU test in 3DMark Vantage can benefit from the PPU (Physics Processing Unit), performance logically increases only in this part. With the PPU, 34 percent higher compute performance is achieved, whereby the CPU result increases from 16,139 points to 17,820 points. The overall result remains pretty much unaffected, however, as the CPU value has only a marginal influence on the overall score.
This was known but the problem was what happened next.
The Inquirer posted something up about driver cheating this week and that got the industry buzzing. The Inq claimed that NVIDIA was using in-house PhysX API’s and that they were able to manipulate the score in 3DMark Vantage since they can make the graphics card, drivers and physics API that is used for the benchmark. Our test scores showed the significant performance increase that everyone was up in arms about, but from what we could tell it was just off-loading the workload from the CPU to the GPU. The new NVIDIA drivers allow GPUs to do what once took a dedicated AGEIA PhysX card! The days of running a PPU and a GPU in a system are soon to be long gone!

NVIDIA is not using driver optimizations to cheat on benchmarks, they are just doing what someone with a PhysX card could do months ago. source
 
Last edited:

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
13,006 (2.50/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Corsair K65 Plus 75% Wireless - USB Mode
Software Windows 11 Pro 64-Bit
RTX is just a marketing term for Ray Tracing. Nothing proprietary

Well, now it is, and it was Nvidia's way of naming GPUs to differentiate between RTX and GTX video cards. RTX cards have RT core hardware, GTX cards don't, but that was only really the case for the Turing generation; now, with Ampere, even low- to mid-range cards are all RTX.

Okay, so Nvidia RTX is proprietary; sorry for missing the "X" at the end...

AMD said that you can get your ray-tracing only in the cloud. Good luck!

(attachment 247161: AMD press release slide)
What's your point? That might be the most honest press release slide I have ever seen from a corporation in this industry.

But make no mistake, RDNA2 has RT hardware in it. It's first-gen, and a different implementation from Nvidia's, and it appears to be not as good, but acting like RT is proprietary to Nvidia is absurd and false.

We are a long way off from fully, natively ray-traced scenes in video games, and if that requires cloud and/or AI to help, so be it. Why the dig at AMD about that? Ray tracing will remain only part of the local rendering pipeline for quite a while, I suspect.
 
Joined
Jun 2, 2017
Messages
9,201 (3.36/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Control used lots of DLSS versions; only version 1.9 was not tensor-based, and only in Control. It also looked like complete crap. I was playing Control at the time and was not happy with DLSS 1.9.

As a person who really did play with DLSS 1.x: it was bad at the start, then you would get an update and it was magic. FSR 1 was complete garbage and could not match DLSS 1.




You can tell apart the people who learned about DLSS 1 from propaganda and the people who actually played games with DLSS 1. DLSS 1.0 was released in February 2019 with Metro Exodus and Battlefield V, so this massive image quality increase happened within a month of DLSS 1's release.

PhysX cards could do the same; it was not just Nvidia cards. This is not cheating, as there have been accelerator cards from the dawn of computing history, and they too are not cheating. After all, GPUs are one type of accelerator card. So by your argument we have all been cheating in Time Spy by installing a high-end GPU and not running it all on the CPU. The only problem was AMD owners. That said, I had two 7970s and two 290X GPUs. This is the stuff that turned me off AMD: all the lies, and reviews that can't be trusted.

"Too much tessellation", and not AMD's performance (source). We all know it was AMD cards; I had two 7970s and two 290Xs. The reason for it being so bad is in bold. That's why they were going to make changes in the drivers and ban games from benchmarks.
First of all, who created PhysX, and why was it favourable to Nvidia? As for CrossFire support in those days, it did bring a compelling improvement. It does not matter, though, because you are not convincing anyone with your revised edition of history.
 