
Forspoken: FSR 3

Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
Matrix multiplication on GPUs was already available through CUDA or OpenCL long before the marketing invention of Tensor Cores.
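For reference, this is all it takes on plain compute hardware — a minimal, hypothetical CUDA sketch (names made up for illustration) of a naive matrix multiply that runs on ordinary shader/CUDA cores, no Tensor Cores involved:

// Minimal sketch: naive NxN float matrix multiply on plain CUDA cores.
// One thread computes one output element; no tensor/matrix units required.
__global__ void matmul_naive(const float* A, const float* B, float* C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < N; ++k)
            acc += A[row * N + k] * B[k * N + col];  // plain FMA math
        C[row * N + col] = acc;
    }
}

// Example launch for N = 1024 (grid/block sizes are illustrative):
// dim3 block(16, 16);
// dim3 grid((1024 + 15) / 16, (1024 + 15) / 16);
// matmul_naive<<<grid, block>>>(dA, dB, dC, 1024);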
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
It's about the performance, not just the possibility of running certain stuff on the GPU. If DLSS processing takes too long per frame, the performance gains will be smaller (or zero, just like running XeSS on GPUs without DP4a).
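For context on the DP4a point: with the instruction, a packed INT8 dot product is a single hardware op; without it you unpack bytes and do the math lane by lane, which is how an upscaler's per-frame cost can eat the gains. A rough, hypothetical CUDA sketch (the function name is made up) of the two paths:

#include <cuda_runtime.h>

// Each int packs four signed 8-bit values; acc accumulates a dot product.
__device__ int dot_i8x4(int packed_a, int packed_b, int acc)
{
#if __CUDA_ARCH__ >= 610
    return __dp4a(packed_a, packed_b, acc);   // one hardware instruction (sm_61+)
#else
    // Fallback for GPUs without DP4a: unpack and multiply per byte (much slower).
    for (int i = 0; i < 4; ++i) {
        int a = (signed char)((packed_a >> (8 * i)) & 0xFF);
        int b = (signed char)((packed_b >> (8 * i)) & 0xFF);
        acc += a * b;
    }
    return acc;
#endif
}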
 
Joined
Dec 29, 2020
Messages
229 (0.15/day)
While it is definit
It's about the performance, not just the possibility of running certain stuff on the GPU. If DLSS processing takes too long per frame, the performance gains will be smaller (or zero, just like running XeSS on GPUs without DP4a).

In the end it is probably similar to running other AI inference workloads. RDNA3 is competitive with Nvidia here, for example in this Stable Diffusion benchmark: https://www.pugetsystems.com/labs/a...ion-performance-nvidia-geforce-vs-amd-radeon/

RDNA2 and older, not so much. That said, DLSS should be able to run on AMD hardware provided it uses the right instructions; of course, this would require a translation layer.

Honestly, I really wish Intel, as the smallest player, had made XeSS open source and implemented it so that Nvidia Turing and newer, as well as RDNA3, could run it.
 
Joined
Jan 19, 2023
Messages
409 (0.53/day)
While it is definit


In the end it is probably similar to running other AI inference workloads. RDNA3 is competitive with Nvidia here, for example in this Stable Diffusion benchmark: https://www.pugetsystems.com/labs/a...ion-performance-nvidia-geforce-vs-amd-radeon/

RDNA2 and older, not so much. That said, DLSS should be able to run on AMD hardware provided it uses the right instructions; of course, this would require a translation layer.

Honestly, I really wish Intel, as the smallest player, had made XeSS open source and implemented it so that Nvidia Turing and newer, as well as RDNA3, could run it.
You can already run XeSS; of course it's not as performant or as good quality-wise as on Arc with its XMX cores, but you can.
TBH, rather than Intel or Nvidia doing something, I would rather have AMD finally jump on the bandwagon and utilize their AI Accelerators (as it even says on my 7900 XTX box) to improve FSR 2.
Have a fallback to standard FSR 2 for older RDNA cards, but release a new version for RDNA3.
 
Joined
Mar 19, 2023
Messages
153 (0.22/day)
Location
Hyrule Castle, France
Processor Ryzen 5600x
Memory Crucial Ballistix
Video Card(s) RX 7900 XT
Storage SN850x
Display(s) Gigabyte M32U - LG UltraGear+ 4K 28"
Case Fractal Design Meshify C Mini
Power Supply Corsair RM650x (2021)
Well, now we'll never know whether DLSS was actually running on my RX 6800, since the latest demo update removed it from the game options.

Also, not having "Tensor Marketing Cores" doesn't mean the same math/matrix multiplications can't be run on other GPUs. I wonder how the world did matrix multiplications for AI before Nvidia invented the Marketing Cores :>
In hindsight I'm pretty certain it was TSR.

However I agree that the Tensor Core bullshit has to stop. Sheeple will literally eat anything...
A tensor op is just a matrix-by-matrix multiplication. While it has its uses, especially for neural networks/AI, it's not wizardry. Proof is that AMD doesn't have giant matrix cores in its cards and still has this kind of crappy but functional ray tracing technology that runs on the TMUs plus a software solution. It's bad, but it does work. The math checks out in the end: slower, but accurate.

I have 100% confidence that the only reason the Tensor meme was thrown around back in the Turing days is that Nvidia wanted to do the Nvidia thing and force everyone to upgrade from the 1000 series, and didn't want to have to say "we refuse to let DLSS run on the 1000 series, pay up, peasants".
They've always been like this and the sheeple have always obediently bleated along, but the ever-growing list of clues that this was corporate policy and not a necessity at all should've at least made them react a bit. If anything, the fact that FSR 3 currently runs on a 3080 while DLSS 3 doesn't is another smoking gun, but at this point, trying to estimate the level of Nvidia's fuckery with "bullshit reasons to force you to upgrade" means standing in a smog of smoking guns. FSR 3, FSR 2, DLSS 3 FG being locked to the 4000s while DLSS 3.5 is a-ok on the 3000s, magical "Tensor cores" that are so Tensor that if the DLSS source code were released, AMD could probably make it run on their cards + consoles in a matter of a few months... the list is long.

It would've been interesting to see if DLSS running on AMD had any real perf impact. I expect the perf would've been lower, but how much lower is the question, and I expect very little. Nvidia's justifications for gatekeeping everything have always been flimsy, and this would've been a great occasion to prove just how deep the bullshit runs.
Not that it would've changed much. The Sheeple would've found another excuse whispered straight from Nvidia's marketing on Reddit, the bullshit would be repeated until it became an accepted fact, and the caravan would've passed same as ever.
But as @theouto said, it would've been a fun weekend.

You can already run XeSS; of course it's not as performant or as good quality-wise as on Arc with its XMX cores, but you can.
TBH, rather than Intel or Nvidia doing something, I would rather have AMD finally jump on the bandwagon and utilize their AI Accelerators (as it even says on my 7900 XTX box) to improve FSR 2.
Have a fallback to standard FSR 2 for older RDNA cards, but release a new version for RDNA3.
I agree that by now, either AMD pushes on for FSR 2.3 and keeps tryharding to get upscaling right with a hand-written algorithm, or they just go with the solution literally everyone else has gone with and do some AI work with their upscaler.
I feel like FSR 1/2 were amazing for their time, but their "time" was the 500-6000/1000-3000 era. Every single GPU from here on will ship with some AI capability. It's great that a very decent solution was provided to upscale games on older hardware, but now it's time to focus on the future and get some higher quality stuff. ML seems like the way to go to accelerate their progress on image quality.
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
AMD doesn't have to push FSR 2.3; they just need to make sure devs use the correct implementation of FSR. As we saw with the Talos 2 demo, setting r.FidelityFX.FSR2.ReactiveHistoryTranslucencyLumaBias=1 basically fixes the particles and ghosting.
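For anyone who wants to try that before a patch lands: in Unreal Engine games a console variable like this can usually be forced from the per-game Engine.ini under [SystemSettings] (the exact config path varies by title, so treat this as a sketch rather than a guaranteed fix):

[SystemSettings]
r.FidelityFX.FSR2.ReactiveHistoryTranslucencyLumaBias=1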

I'm 100% sure that all the media, influencers, youtubers, etc. will basically use the FSR particles and ghosting in Talos 2 to promote DLSS superiority if the devs don't fix it before the game's release :>

 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
However I agree that the Tensor Core bullshit has to stop. Sheeple will literally eat anything...
A tensor op is just a matrix-by-matrix multiplication. While it has its uses, especially for neural networks/AI, it's not wizardry. Proof is that AMD doesn't have giant matrix cores in its cards and still has this kind of crappy but functional ray tracing technology that runs on the TMUs plus a software solution. It's bad, but it does work. The math checks out in the end: slower, but accurate.
Look at XeSS. With non-Arc graphics it uses a simplified neural network, yet it still runs slower than FSR 2 (significantly in some games). Now imagine running "full" XeSS (or DLSS) without matrix accelerators... good luck with that. NVIDIA is of course blocking DLSS on RDNA 3 and Xe GPUs even though they have dedicated hardware to run it, but it's their technology and they have no obligation to make it open.
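To put some shape on that: dedicated matrix units are exposed through things like CUDA's WMMA API, where a single warp processes a whole 16x16x16 tile per mma instruction instead of issuing hundreds of scalar FMAs. A rough, hypothetical sketch (tile size and kernel names are purely illustrative) of the accelerated path next to the generic fallback a GPU without matrix units is stuck with:

#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Tensor-core path: one warp computes a 16x16 output tile from FP16 inputs.
// Requires sm_70+ hardware with Tensor Cores; launch with one warp (32 threads).
__global__ void gemm_tile_wmma(const half* A, const half* B, float* C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // whole tile in one go
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

// Generic fallback: the same 16x16 tile as 16 scalar FMAs per output element,
// which is what a GPU without matrix units is stuck doing. Launch as a 16x16 block.
__global__ void gemm_tile_fallback(const half* A, const half* B, float* C)
{
    int row = threadIdx.y, col = threadIdx.x;
    float acc = 0.0f;
    for (int k = 0; k < 16; ++k)
        acc += __half2float(A[row * 16 + k]) * __half2float(B[k * 16 + col]);
    C[row * 16 + col] = acc;
}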
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
Digital Foundry surprisingly did a better job than Hardware Unboxed's poor joke of a coverage of FSR 3 FG:

 
Joined
Jan 19, 2023
Messages
409 (0.53/day)
Digital Foundry surprisingly did a better job than Hardware Unboxed's poor joke of a coverage of FSR 3 FG:

It really seems like the devs just had a deadline and the product manager decided to say "F it" and ship FSR 3 in Immortals and Forspoken, since nobody cares about those, and also release the preview driver, so that nobody can say the release was postponed while they keep working on the code, now with a little more peace of mind and a lot of beta testers.

On one hand that's good, because we at least have something, and we know the interpolation part is great, even more so since it does not require hardware acceleration. They just need to figure out VRR so that frames below VSync don't judder.
On the other hand, I have a feeling we're a long way off from seeing implementations in bigger games.
 
Joined
Mar 19, 2023
Messages
153 (0.22/day)
Location
Hyrule Castle, France
Processor Ryzen 5600x
Memory Crucial Ballistix
Video Card(s) RX 7900 XT
Storage SN850x
Display(s) Gigabyte M32U - LG UltraGear+ 4K 28"
Case Fractal Design Meshify C Mini
Power Supply Corsair RM650x (2021)
Look at XeSS. With non-Arc graphics it uses a simplified neural network, yet it still runs slower than FSR 2 (significantly in some games). Now imagine running "full" XeSS (or DLSS) without matrix accelerators... good luck with that. NVIDIA is of course blocking DLSS on RDNA 3 and Xe GPUs even though they have dedicated hardware to run it, but it's their technology and they have no obligation to make it open.
One, that's a non-argument. FSR is proof that you can get by with far fewer matrix ops.
Two, I'd like some numbers, not a very imprecise "slower".

As for the ultimate non-argument of "Nvidia has no obligation", yes they do. They have that obligation towards their buyers. Their buyers are sheeple that are ok with being fleeced. But if you told an AMD buyer "we coooooould do it for you, but we're under no obligation", there'd be revolt. In Nvidia land, every sheeple walks where they're told.
The fact that Nvidia has managed to create this cult mentality where nothing they do is criticizable and any lie that they throw should be accepted at face value doesn't change the fact that their customers pay to get what Nvidia provides. They don't pay to get baited into paying more. Or rather they do, but that's not a feature of capitalism, it's a bug.
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
FSR is proof that you can get by with far fewer matrix ops.
If you don't care about image quality, then indeed you don't need to use AI at all :)
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
What image quality are you talking about? When I had an RX 580, I played the majority of demanding games with FSR 1 enabled through Proton GE and didn't have any problems at all. In fact, it was a night-and-day difference between a slideshow and playable frame rates, and it looked much better than running the games at mega-low settings, aka "better than native" :>
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
So for you even first gen FSR was ok in terms of image quality? All right...
 
Joined
Dec 12, 2012
Messages
812 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
So for you even first gen FSR was ok in terms of image quality? All right...

Technically it was better in some aspects. While it resolved less detail, it didn't add instability or ghosting, as it was just a spatial upscaler.

I used FSR1 Ultra Quality in 4K in Far Cry 6 and it looked great on a TV.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,491 (1.31/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte B650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30 tuned
Video Card(s) Palit Gamerock RTX 5080 oc
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Arctic P12's
Benchmark Scores 4k120 OLED Gsync bliss
If you don't care about image quality, then indeed you don't need to use AI at all :)
Pretty much. I'll give it time to improve, but based on history I won't be expecting miracles IQ-wise; still, if they can iron out the bugs and get it into games, everyone wins. It's interesting to see so many people say "see, AMD did it, so this is proof Nvidia didn't need to lock it to the 40 series" without understanding how it might have looked, how it might have performed, or what the latency would have been. Clearly they'd been working on the idea for a long time, and for the best results it needed faster hardware.

Also, the irony of the chap who responded claiming you're the one talking BS while going on massive rants about Nvidia sheeple... wasn't lost on me.

As it turns out, leveraging the right hardware clearly gets the best results, and while open tech is absolutely great for everyone else, it's easy to see why some solutions aren't made open. Imo Intel half shot themselves in the foot with XeSS: all code paths are simply called XeSS despite vastly different performance and IQ, and I can easily see why Nvidia wouldn't want that for their brand/product reputation.
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
Let's get straight to the point: game devs had 3 years at their disposal to get familiar with the DLSS API before FSR launched. As we saw with the Talos 2 demo, they still don't use the correct settings for FSR, and that translates into lower graphics quality, ghosting and other problems that feed the general public's biased opinion that FSR is somehow inferior to other upscalers.

The fact that it can run on all vendors without requiring "dedicated hardware" aka wasted silicon space makes it superior to DLSS, and the open-source side of things means that anyone can see/use/modify and contribute code to fix problems and improve it - although that last part hasn't taken off yet.

I buy the hardware that fits my needs - in my case AMD all the way (Nvidia is really a sad joke on Linux) - not what random nobodies preach on the internet. And I can also run AI inside containers so it doesn't fill my system with trash :p

StableDiffusion.png
 
Joined
Dec 29, 2020
Messages
229 (0.15/day)
The Talos demo was just that, an early demo; XeSS and DLSS were not working in the first place. Another thing to consider is that developers are generally lazy: if the defaults are bad, that is also on AMD / the engine maker. Even a well-implemented FSR 2 is inferior to DLSS, and DLSS is more tolerant of lazy implementations.
As for the required hardware, it was more the other way around: Nvidia had these Tensor Cores included in their architecture and looked for a use for them in gaming, which is why they developed DLSS in the first place. RDNA 3 has similar acceleration but uses its regular GPU cores for it, and Intel has similar accelerators. So all parties deem these useful to have, and calling the silicon space spent on DLSS wasted is simply false.

Don't get me wrong, I much prefer an open, vendor-independent solution, but I also agree that DLSS offers superior quality even when FSR 2 is well implemented, and it is generally easier to implement.
The DLSS mod for Starfield, for example, offers better image quality without any integration done by the original developers.
 

BicycleBicycle

New Member
Joined
Oct 6, 2023
Messages
11 (0.02/day)
Pretty much. I'll give it time to improve, but based on history I won't be expecting miracles IQ-wise; still, if they can iron out the bugs and get it into games, everyone wins. It's interesting to see so many people say "see, AMD did it, so this is proof Nvidia didn't need to lock it to the 40 series" without understanding how it might have looked, how it might have performed, or what the latency would have been. Clearly they'd been working on the idea for a long time, and for the best results it needed faster hardware.

Also, the irony of the chap who responded claiming you're the one talking BS while going on massive rants about Nvidia sheeple... wasn't lost on me.

As it turns out, leveraging the right hardware clearly gets the best results, and while open tech is absolutely great for everyone else, it's easy to see why some solutions aren't made open. Imo Intel half shot themselves in the foot with XeSS: all code paths are simply called XeSS despite vastly different performance and IQ, and I can easily see why Nvidia wouldn't want that for their brand/product reputation.
Based on early results with AFMF, 7000 series cards perform better, while some 6000 series cards have issues, especially with regard to artifacts and ghosting. Results seem to vary greatly and diminish quickly as you move towards the more budget 6000 series offerings. That goes back to your point about what the expectations are and what experience the company expects to provide. If the experience they want to provide is not acceptable and devalues the branding of the product, then it's likely not going to happen. Apple does the same, and other companies do it as well.

Just because you can, doesn't always mean you should.

Of course, if the objective is simply to provide the tech and you don't really care what the user experience ends up being, nor whether the tech may perform less than adequately on certain hardware, then go for it. Which, to me, seems to be the case here. Hopefully future updates will mitigate some of the issues, but as it sits, there's a reason AMD initially intended to release AFMF only for the 7000 series: it just works better with that hardware. There are already people with 6600/6700 cards saying that AFMF is garbage, and that's mostly because their hardware is not meeting their expectations, which impacts the branding negatively.
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
The fact that it can run on all vendors without requiring "dedicated hardware" aka wasted silicon space makes it superior to DLSS, and the open-source side of things means that anyone can see/use/modify and contribute code to fix problems and improve it - although that last part hasn't taken off yet.
You really think AI accelerators were introduced specifically to support image scaling in games? Lol, that's totally the other way around.
I buy the hardware that fits my needs - in my case AMD all the way (Nvidia is really a sad joke on Linux) - not what random nobodies preach on the internet. And I can also run AI inside containers so it doesn't fill my system with trash :p
Good for you, we're happy you're satisfied with your purchase :)
general public's biased opinion that FSR is somehow inferior to other upscalers
It's not a biased opinion, it's a simple fact that DLSS is the superior upscaler in terms of image quality. Some games do have implementation issues, but that doesn't make FSR 2 an equal tech. You have plenty of evidence to the contrary, for example the comparisons here at TechPowerUp, the videos from Hardware Unboxed and many more. Are you really trying to claim that in every single game it's because FSR wasn't implemented correctly? Nice shillin' mate.
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
I still don't understand the DLSS preaching when I don't own an Nvidia GPU to form my own opinion. When Nvidia GPUs fully work on Mesa drivers, maybe I'll consider their offering when I spend my money on a new GPU. And nope, their garbage NVK (the open-source Nvidia Vulkan driver) doesn't stand a chance, but who knows, DLSS might help it, if only it wasn't locked to their proprietary drivers :>

NvidiaOpenSourceDriver.png


Don't bother boring me with the double-standard FUD that it's new and can improve, when the same reasoning isn't applied to FSR 3 FG, for example.
 
Joined
Dec 29, 2020
Messages
229 (0.15/day)
I still don't understand the DLSS preaching when I don't own an Nvidia GPU to form my own opinion. When Nvidia GPUs fully work on Mesa drivers, maybe I'll consider their offering when I spend my money on a new GPU. And nope, their garbage NVK (the open-source Nvidia Vulkan driver) doesn't stand a chance, but who knows, DLSS might help it, if only it wasn't locked to their proprietary drivers :>

View attachment 316556

Don't bother boring me with the double-standard FUD that it's new and can improve, when the same reasoning isn't applied to FSR 3 FG, for example.
DLSS being superior is just one factor in Nvidia's favor; it does not mean you have to use Nvidia just because they have the better upscaler.
Nvidia's graphics drivers being quite terrible to work with on Linux is indeed a thing (also the reason I switched to AMD on my latest GPU).

However, you should still recognize areas of weakness. I personally really do not like the fact that Nvidia builds this type of vendor-locked tech. Still, that does not mean the end result is not better.
 
Joined
Jan 5, 2008
Messages
158 (0.03/day)
Processor Intel Core i7-975 @ 4.4 GHz
Motherboard ASUS Rampage III GENE
Cooling Noctua NH-D14
Memory 3x4 GB GeIL Enhance Plus 1750 MHz CL 9
Video Card(s) ASUS Radeon HD 7970 3 GB
Storage Samsung F2 500 GB
Display(s) Samsung SyncMaster 2243LNX
Case Antec Twelve Hundred V3
Audio Device(s) VIA VT2020
Power Supply Enermax Platimax 1000 W Special OC Edition
Software Microsoft Windows 7 Ultimate SP1
When Nvidia GPUs fully work on Mesa drivers, maybe I'll consider their offering when I spend my money on a new GPU. And nope, their garbage NVK (the open-source Nvidia Vulkan driver) doesn't stand a chance, but who knows, DLSS might help it, if only it wasn't locked to their proprietary drivers :>
Not a Linux user myself, so let me ask you this question - what exactly is the problem with just using NVIDIA's proprietary drivers? That they're closed source?
 
Joined
Feb 18, 2020
Messages
14 (0.01/day)
Location
Latvia
In theory it may be possible to translate the calls into something AMD GPUs understand. Yes, Tensor Cores are used for DLSS, but the same type of calculations can also be done on the regular GPU cores.

That being said, the prospects of someone writing a translation layer are slim, and the performance might not be great either, as Tensor Cores are more efficient at this. But if you take Stable Diffusion as a benchmark, the 7900 XTX can keep up with a 3090 Ti.
DLSS was originally written for CUDA, and Control had one of the CUDA builds running with an approximation algorithm. Obviously I don't know how it works underneath, but it's possible that you don't actually need either Tensor or CUDA cores to run the algorithm; from what I've seen, though, it most likely requires at the very least the CUDA architecture to run.
If this had been true and DLSS had run on a Radeon, it would have made sense and also meant that it's written for general-purpose compute hardware.
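If someone ever attempted that kind of portability/translation layer, the host side would boil down to capability detection and picking a code path. A hypothetical CUDA-side sketch (names and the compute-capability heuristic are illustrative only; some sm_75 parts like the GTX 16 series lack Tensor Cores, so a real layer would need a proper feature query):

#include <cstdio>
#include <cuda_runtime.h>

// Purely illustrative: choose which math path an upscaler would use at runtime.
enum class MatmulPath { TensorCores, Dp4aInt8, PlainShader };

MatmulPath pick_path(int device)
{
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, device);

    // Rough heuristic: Volta (sm_70) and newer generally expose Tensor Cores,
    // though the GTX 16 series (sm_75) is a known exception.
    if (prop.major >= 7)
        return MatmulPath::TensorCores;
    // DP4a packed INT8 dot products exist since Pascal (sm_61).
    if (prop.major == 6 && prop.minor >= 1)
        return MatmulPath::Dp4aInt8;
    return MatmulPath::PlainShader;   // generic FP32 math, slowest path
}

int main()
{
    MatmulPath p = pick_path(0);
    std::printf("selected path: %d\n", static_cast<int>(p));
    return 0;
}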
 
Joined
Sep 27, 2017
Messages
43 (0.02/day)
System Name Fedora
Processor 5800X3D
Motherboard X370
Memory 32GB
Video Card(s) RX 6800
Bet you didn't know that CUDA cores are actually just compute shaders with a fancy marketing name :>
 