
Next-Gen Radeon "Polaris" Nomenclature Changed?

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.42/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Sounds reasonable (and well in line with Raja's talk about how important multi-GPU is and how AMD is going to educate developers).

And yikes. Risky strategy to say the least.
It's not really risky. D3D12/Vulkan will roll out to developers. PS4K and Xbox One.One (ha!) are also likely to have a dual-GPU solution, which will be ported to their desktop brethren. In the meantime, you'll still get a ~40% boost from having the second GPU through Crossfire (it should exceed 50% in games that implement it themselves).

Voodoo 5 5500 strikes again, and this time, to stay! :D
 
Joined
Jul 9, 2015
Messages
3,413 (0.98/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
In the meantime, you'll still get a ~40% boost from having the second GPU through Crossfire.

The strategy makes sense on paper, I admit.
Smaller chips are easier to design.
Consoles are in AMD's pocket.
Yadayada master plan.

But then, the only config faster than a 1070 will be dual-chip.
Many users avoid such configs because, at least at the moment, they do cause trouble.
Not only could you get NO boost from the second GPU, it might crash on you altogether.

If you price it higher than a 1070, people won't buy it.
Also: 232 mm² + 232 mm² = 464 mm²

And with a single chip at 115-140 W, a dual-chip card will land somewhere around 170-210 W, i.e. more than a 1080.
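For concreteness, the arithmetic above as a runnable back-of-envelope sketch (the flat 1.5x board-power factor is an assumption inferred from the 115-140 W and 170-210 W figures quoted here, not anything AMD has stated):

```c
/* Back-of-envelope: two 232 mm^2 dies, with board power assumed to scale
 * by ~1.5x rather than 2x (shared memory, VRM, binning). Hypothetical
 * numbers taken from the post above. */
#include <stdio.h>

int main(void)
{
    const int die_mm2 = 232;
    const double single_lo_w = 115.0, single_hi_w = 140.0;
    const double dual_power_factor = 1.5;  /* assumption, not a spec */

    printf("dual-chip silicon: %d mm^2\n", 2 * die_mm2);
    printf("dual-chip power:   %.0f-%.0f W\n",
           single_lo_w * dual_power_factor,
           single_hi_w * dual_power_factor);
    return 0;
}
```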

PS
Heck, the 295X2 wipes the floor with an overclocked Titan X/980 Ti in so many games...
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Vega is the answer to the GTX 1080, not Polaris. The fact that Polaris is likely to be about equal to the GTX 1070 is in AMD's favor.

As multi-GPU code in games takes over, Crossfire/SLI becomes less important. That should translate to fewer crashes.

Yeah, power will be higher, but that is no surprise. GCN is capable of async multithreading, and that comes at a cost. GPUs can't power down parts the way CPUs can because their pipelines are much more complex.

The 295X2 launched at $1,500. I could see the RX 480 launching at $600-700, which puts it squarely in the same price bracket as the GTX 1080.
 
Joined
Sep 17, 2014
Messages
22,933 (6.07/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
The strategy makes sense on paper, I admit.
Smaller chips are easier to design.
Consoles are in AMD's pocket.
Yadayada master plan.

But then, the only config faster than a 1070 will be dual-chip.
Many users avoid such configs because, at least at the moment, they do cause trouble.
Not only could you get NO boost from the second GPU, it might crash on you altogether.

If you price it higher than a 1070, people won't buy it.
Also: 232 mm² + 232 mm² = 464 mm²

And with a single chip at 115-140 W, a dual-chip card will land somewhere around 170-210 W, i.e. more than a 1080.

PS
Heck, the 295X2 wipes the floor with an overclocked Titan X/980 Ti in so many games...

You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12, APIs, and console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? It is the same 'two equals one for more performance' and 'more cores' approach they've adopted so many times. I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regard to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've also been there before.

FWIW, I really do hope AMD surprises us in a good way and makes choosing them this time around a true no-brainer. They need it, and with Nvidia releasing the 1080 at this price point, the market evidently needs it too.
 
Joined
Jul 9, 2015
Messages
3,413 (0.98/day)
You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12, APIs, and console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regard to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've also been there before.

The problem is, roughly 7,000 of AMD's 8,000 employees are engineers (you can't get a much smaller non-engineering workforce than that).
They do R&D in both GPUs and CPUs, yet their budget is smaller than Nvidia's alone (and something like 10 times smaller than Intel's).

And then there is that "smaller chips are cheaper to design" thing.

This time they have API support and the consoles (nothing like that was going for them in Bulldozer times), so let's see how it goes. I'll keep my fingers crossed. =/
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,750 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12, APIs, and console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regard to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've also been there before.

FWIW, I really do hope AMD surprises us in a good way and makes choosing them this time around a true no-brainer. They need it.

Things are a bit different now, though. Pure grunt is still king, but that will only get you so far. It might be a bit early if that is the direction they're going; Bulldozer was way too early.
 

FordGT90Concept

"I go fast!1!11!1!"
Bulldozer was just a terrible architecture all around. It was made to be different only for the sake of being different. There was nothing rational about its design.

The better comparison is the Athlon 64 FX-60: a dual-core CPU when single-core CPUs were all the rage. Just 10 years later, budget CPUs are dual-core.

If the speculation that the RX 480 is a dual-GPU Polaris 10 is correct, it could be the equivalent of the Athlon 64 X2: the affordable alternative to the FX-60, and the series of chips that slowly but surely conquered the market until Core 2 Duo debuted. Before that, all the market offered were FX-60 equivalents (the biggest, baddest GPUs, with prices to match).
 
Joined
Nov 18, 2010
Messages
7,607 (1.47/day)
Location
Rīga, Latvia
System Name HELLSTAR
Processor AMD RYZEN 9 5950X
Motherboard ASUS Strix X570-E
Cooling 2x 360 + 280 rads. 3x Gentle Typhoons, 3x Phanteks T30, 2x TT T140 . EK-Quantum Momentum Monoblock.
Memory 4x8GB G.SKILL Trident Z RGB F4-4133C19D-16GTZR 14-16-12-30-44
Video Card(s) Sapphire Pulse RX 7900XTX. Water block. Crossflashed.
Storage Optane 900P[Fedora] + WD BLACK SN850X 4TB + 750 EVO 500GB + 1TB 980PRO+SN560 1TB(W11)
Display(s) Philips PHL BDM3270 + Acer XV242Y
Case Lian Li O11 Dynamic EVO
Audio Device(s) SMSL RAW-MDA1 DAC
Power Supply Fractal Design Newton R3 1000W
Mouse Razer Basilisk
Keyboard Razer BlackWidow V3 - Yellow Switch
Software FEDORA 41
First of all, GLSL is a shading language and is itself a separate spec from OpenGL

In OpenGL, almost everything is a fragmented, vendor-specific mess. OpenGL 4.5 actually supports almost everything SPIR-V does, or at least it should. Nvidia will actually release a cross-compiler between GLSL and SPIR-V (screwing things up for AMD again, for everyone who uses it).

And normal devs are not usually lazy; they are not so plain stupid as to break things and add unneeded time and money costs. Vulkan in its current state is still more of a PR gimmick. I cannot see anyone writing their engine and engine-creation tools from scratch in pure C#. UE4 and the others are still in development. It consists of functions rooted in OpenGL: the same functions doing the same thing, just with a compiler that translates to SPIR-V and feeds the Vulkan driver to render the scene. OpenGL isn't disappearing anywhere.
 
Joined
Jun 22, 2015
Messages
203 (0.06/day)
Wrong. As long as it uses GLSL, it will use OpenGL as a core in the application part. I haven't seen a native Vulkan engine. All the Vulkan-enabled games we have are Vulkan ports, with less efficiency than there actually should be. As always, it's more of a code mess in reality than the adverts tell you. id won't recode all of their engine in a flash; economically, they won't even bother.
What is SPIR-V? No company worth their salt will keep shipping GLSL stuff anymore (shader source shipped to clients, client-side compilation; SPIR-V is compiled beforehand, no IP loss, yada yada).
You can write a translation layer between Vulkan and an OpenGL application (it has been done; can't find the links) and that alone _will_ improve performance. No one in their right mind would call that a Vulkan-enabled application, but it still works.

Have you seen and coded all the available game engines that use Vulkan? Have you even coded anything that uses Vulkan/OGL/OGL ES, for that matter? Why are you trying to pass your half-assed assumptions off as truths? Gosh...

In OpenGL, almost everything is a fragmented, vendor-specific mess. OpenGL 4.5 actually supports almost everything SPIR-V does, or at least it should. Nvidia will actually release a cross-compiler between GLSL and SPIR-V (screwing things up for AMD again, for everyone who uses it).

And normal devs are not usually lazy; they are not so plain stupid as to break things and add unneeded time and money costs. Vulkan in its current state is still more of a PR gimmick. I cannot see anyone writing their engine and engine-creation tools from scratch in pure C#. UE4 and the others are still in development. It consists of functions rooted in OpenGL: the same functions doing the same thing, just with a compiler that translates to SPIR-V and feeds the Vulkan driver to render the scene. OpenGL isn't disappearing anywhere.
OK, no need to answer anything; I can see already what the response would be. You can't even distinguish between the standard and vendor-specific extensions (and those who choose to use them know damn well what happens)...
 
Last edited:
Joined
Dec 28, 2012
Messages
4,067 (0.92/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Let's do some math!

RX 480
2560 SPs
unknown clock

R9 390
2560 SPs
1000 MHz

The R9 390 got 57.4 fps, so let's scale it:

1.0 GHz = 57.4 fps
1.1 GHz = 63.1 fps
1.2 GHz = 68.9 fps
1.3 GHz = 74.6 fps
1.4 GHz = 80.4 fps
1.5 GHz = 86.1 fps
1.6 GHz = 91.8 fps
1.7 GHz = 97.6 fps (beats GTX 1080)

I doubt it will be clocked higher than 1.5 GHz (1.1-1.2 GHz is the most realistic). Unless AMD made some massive improvements to OpenGL rendering, the RX 480 isn't likely to top GTX 1080.
Your math assumes that 1 Polaris SP = 1 Hawaii SP, and that AMD made absolutely zero improvements to IPC, color/texture compression, etc.
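For the curious, here is that linear scaling as a quick runnable sketch. It bakes in exactly the assumption being objected to: one Polaris SP behaves like one Hawaii SP, with zero IPC or compression gains.

```c
/* Linear clock scaling of the R9 390 baseline (2560 SPs, 57.4 fps at
 * 1.0 GHz) quoted above. Pure frequency scaling: IPC, bandwidth, and
 * compression improvements are deliberately ignored. */
#include <stdio.h>

int main(void)
{
    const double base_fps = 57.4;  /* R9 390 @ 1.0 GHz */

    for (int tenths = 10; tenths <= 17; ++tenths) {
        double ghz = tenths / 10.0;
        printf("%.1f GHz = %.1f fps\n", ghz, base_fps * ghz);
    }
    return 0;
}
```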
 

FordGT90Concept

"I go fast!1!11!1!"
There's some improvement to the Polaris architecture, but I doubt it will account for much. Polaris is mostly about the process-tech change, which impacts power consumption, transistor count, and clock speeds. AMD is focusing on low power consumption, the transistor count isn't changing much (actually fewer SPs than the 390X), and AMD hasn't given us any indication that clock speeds are changing much either (predominantly to keep power consumption low). The data we do have strongly suggests Fury-like performance for under 150 W. It should be close to the GTX 1070 in virtually every way.
 
Joined
Nov 18, 2010
Messages
7,607 (1.47/day)
Location
Rīga, Latvia
(and those who choose to use them know damn well what happens)...

You shout as if GameWorks never happened? Devs will take any help; if a GPU vendor offers it, they will use it, and nobody at dev HQs gives a crap. As I said, speed- and function-wise OpenGL 4.5 is actually almost the same as Vulkan. If you cannot match two identical functions with different names, that ain't my problem. OpenGL will live on as the higher-level API using GLSL and will remain the de facto choice for smaller indie projects, just as the current AAA OpenGL projects are (Wolfenstein and Doom). Even consoles use two kinds of SDK: one higher-level and easier to code against, and one closer to the metal. OpenGL and GLSL aren't going anywhere. Development time costs a lot of money.

Vulkan doesn't guarantee a performance improvement at all, especially in GPU-bound games (and Doom can run on a P4 coffee machine, as proven here on the forums); on the contrary, it may deliver lower performance, especially at high resolutions. It's the same as it was with Mantle... actually, it is just MantleGL. Same problems, same middling résumé: more stutter, and no difference with any reasonable i5 (which sadly means faster than any AMD CPU to date).

Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update; on GitHub, people are actually reporting lower performance, and stutters (obviously due to buggy shader code when casting magic). I don't expect any magic from Doom either.

Too much PR bullcrap, IMHO. It is all raw technology and a shiny term, much like the RGB LED thingies being packed into everything, just because it has to be so, FFS.
 
Joined
Jul 23, 2011
Messages
1,586 (0.32/day)
Location
Kaunas, Lithuania
System Name my box
Processor AMD Ryzen 9 5950X
Motherboard ASRock Taichi x470 Ultimate
Cooling NZXT Kraken x72
Memory 2×16GiB @ 3200MHz, some Corsair RGB led meme crap
Video Card(s) AMD [ASUS ROG STRIX] Radeon RX Vega64 [OC Edition]
Storage Samsung 970 Pro && 2× Seagate IronWolf Pro 4TB in Raid 1
Display(s) Asus VG278H + Asus VH226H
Case Fractal Design Define R6 Black TG
Audio Device(s) Using optical S/PDIF output lol
Power Supply Corsair AX1200i
Mouse Razer Naga Epic
Keyboard Keychron Q1
Software Funtoo Linux
Benchmark Scores 217634.24 BogoMIPS
Nvidia will actually release a cross-compiler between GLSL and SPIR-V (screwing things up for AMD again, for everyone who uses it).

Err... the spec requires a conforming [system-wide] SPIR-V compiler/translator to *at least* be able to compile GLSL code, and the official plan from the very start was "first make GLSL -> SPIR-V, then work on everything else".
That "Nvidia" thing[1] You speak of is probably VK_NV_glsl_shader, which, surprise! surprise!, has shipped as part of the published Vulkan spec since version 1.0.5 (2016-03-04). What it does is allow loading GLSL shaders directly, skipping the translation/compilation-to-SPIR-V step (i.e. instead of [GLSL code] -> [GLSL-to-SPIR-V compiler] -> [ISA-specific SPIR-V compiler] -> [GPU-ISA-specific machine code], it allows [GLSL code] -> [ISA-specific GLSL compiler] -> [GPU-ISA-specific machine code], skipping the SPIR-V step).

[1] The only thing Nvidia owns about it is coming up with the idea and writing the extension spec. There are no IP claims (duh, there is no IP to claim here) and it does not depend on any hardware capabilities / [lack of] limitations, so there is absolutely no reason for other vendors not to implement it too. Although, unlike in OpenGL, where extensions are enabled unless explicitly disabled, in Vulkan most functionality, not only extensions, is disabled unless the programmer explicitly enables it. This is to avoid those situations that sometimes happen in OpenGL, where some feature gets implicitly enabled and unexpectedly gets in the way of code that was written without taking it into account (possibly because it did not even exist at the time of writing).

P.S. Both in Vulkan and OpenGL, as long as there are no IP claims and as long as the hardware allows it, "vendor specific" extensions are not that vendor-specific at all. Other vendors are free to implement them in their drivers, which they often do. For example, on my Nvidia GPU, with the OpenGL implementation I have, maybe some 1/4 of all implemented "vendor specific" extensions are under "NV" (Nvidia, duh), the rest being under "AMD", "ATI", and many other vendors (I count 12 different vendors here).
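To make the two paths concrete, a minimal C sketch (assuming a valid VkDevice and omitting all error handling): the standard route hands vkCreateShaderModule SPIR-V words, while with the VK_NV_glsl_shader extension enabled the same entry point accepts raw GLSL source.

```c
#include <string.h>
#include <vulkan/vulkan.h>

/* Standard path: pCode holds SPIR-V words (codeSize in bytes, multiple of 4). */
VkShaderModule module_from_spirv(VkDevice dev, const uint32_t *words, size_t bytes)
{
    VkShaderModuleCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = bytes,
        .pCode    = words,
    };
    VkShaderModule m = VK_NULL_HANDLE;
    vkCreateShaderModule(dev, &info, NULL, &m);
    return m;
}

/* VK_NV_glsl_shader path: pCode carries GLSL source; the driver compiles it
 * straight to its own ISA, skipping SPIR-V. Only valid when the extension
 * is enabled on the device. */
VkShaderModule module_from_glsl_nv(VkDevice dev, const char *glsl)
{
    VkShaderModuleCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = strlen(glsl),
        .pCode    = (const uint32_t *)glsl,
    };
    VkShaderModule m = VK_NULL_HANDLE;
    vkCreateShaderModule(dev, &info, NULL, &m);
    return m;
}
```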

EDIT:
Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update; on GitHub, people are actually reporting lower performance, and stutters (obviously due to buggy shader code when casting magic). I don't expect any magic from Doom either.

Well, yeah, Vulkan needs a lot more work on the game-dev side, and a LOT more optimization work, again, on the game side.
And yeah, Talos released with slower Vulkan performance, because it was still an early beta implementation and people were basically doing a beta test.
Right now, in most cases, it actually runs faster than any of the other renderers Talos has (D3D9, D3D11, OpenGL, OpenGL ES, Vulkan, and a software renderer).
And no, their Vulkan renderer does not work like an OpenGL -> Vulkan wrapper.
Sauce: I am on a first-name basis with their lead programmer. \_:)_/
 
Joined
Apr 16, 2010
Messages
2,072 (0.38/day)
System Name iJayo
Processor i7 14700k
Motherboard Asus ROG STRIX z790-E wifi
Cooling Peerless Assassin
Memory 32 gigs Corsair Vengeance
Video Card(s) Nvidia RTX 2070 Super
Storage 1tb 840 evo, 1tb samsung M.2 ssd, 1 & 3 tb seagate hdd, 120 gig Hyper X ssd
Display(s) 42" Nec retail display monitor/ 34" Dell curved 165hz monitor
Case O11 mini
Audio Device(s) M-Audio monitors
Power Supply LIan li 750 mini
Mouse corsair Dark Saber
Keyboard Roccat Vulcan 121
Software Window 11 pro
Benchmark Scores meh... feel me on the battle field!
.....bah, the X is purely psychological.... Anything with an X attached to it is automatically cool, mysterious, and powerful: Chemical X, Weapon X (seriously, try it with anything: butterfly... X). Unlike "ultra", which equals failure no matter how good the product.... like Ultrabook, or Ultra Brite toothpaste (seriously, who uses that?)
 
Joined
Jun 22, 2015
Messages
203 (0.06/day)
You shout as if GameWorks never happened? Devs will take any help; if a GPU vendor offers it, they will use it, and nobody at dev HQs gives a crap.
GameSomewhatWorks doesn't even have anything to do with this "OpenGL scenario". When devs choose to use extensions outside the ARB set, they know those are _not_ implemented by other vendors (it's outside the standard), so they know the game will crash on other setups. (At most, Khronos might take some of those out-of-spec extensions and include them in the next OpenGL version if they are useful, and then all vendors must implement them before they can claim version compatibility, even if via a software implementation/emulation.)
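As a quick illustration of how visible that split is in practice, a small sketch (assuming a current GL context and a loader such as GLEW or glad providing the GL 3.0+ entry points): the vendor prefix sits right in each extension name, so you can see at a glance whether code leans on ARB/EXT extensions or on something NV/AMD-only.

```c
#include <stdio.h>
#include <GL/glew.h>  /* supplies glGetStringi / GL_NUM_EXTENSIONS */

/* Print every extension the current driver exposes; names carry their
 * origin as a prefix (GL_ARB_*, GL_EXT_*, GL_NV_*, GL_AMD_*, ...). */
void list_gl_extensions(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);

    for (GLint i = 0; i < count; ++i)
        printf("%s\n", (const char *)glGetStringi(GL_EXTENSIONS, i));
}
```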

As I said, speed- and function-wise OpenGL 4.5 is actually almost the same as Vulkan. If you cannot match two identical functions with different names, that ain't my problem.
No, just no.
OpenGL has no queue prioritization, limited fencing, an almost hard-set pipeline, etc...
Vulkan is _not_ just a resource-name change, but you... errm... someone could spend some time creating a shim to emulate OpenGL on top of Vulkan and still get some performance increase from that alone.
And why should that be your problem? That's the developer's problem, what the hell?
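Queue prioritization, to pick the first item on that list, is something you can point at directly in the Vulkan API; a minimal sketch (the family index and the 1.0/0.25 split are placeholder values):

```c
#include <vulkan/vulkan.h>

/* Request two queues from one family with different scheduling priorities
 * (normalized 0.0..1.0): e.g. a high-priority graphics queue plus a
 * low-priority async transfer/compute queue. Plain OpenGL exposes no
 * equivalent knob. */
VkDeviceQueueCreateInfo prioritized_queues(uint32_t family_index)
{
    static const float priorities[2] = { 1.0f, 0.25f };

    VkDeviceQueueCreateInfo info = {
        .sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = family_index,
        .queueCount       = 2,
        .pQueuePriorities = priorities,
    };
    return info;  /* passed via VkDeviceCreateInfo::pQueueCreateInfos */
}
```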

OpenGL will live on as the higher-level API using GLSL and will remain the de facto choice for smaller indie projects, just as the current AAA OpenGL projects are (Wolfenstein and Doom). Even consoles use two kinds of SDK: one higher-level and easier to code against, and one closer to the metal. OpenGL and GLSL aren't going anywhere. Development time costs a lot of money.
Of course OpenGL won't go anywhere; the spec will be kept frozen and vendors will maintain compatibility with it for old software's sake.
If devs want a half-assed implementation to cut development costs, they will stick with Direct3D: easier development, easier error handling and debugging, easier device binding/management, etc. The driver will help you a lot, even when you are doing stuff wrong. No AAA team will lose time and money porting their stuff to OpenGL, other than indie teams experimenting with an API just to be compliant.

Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update; on GitHub, people are actually reporting lower performance, and stutters (obviously due to buggy shader code when casting magic). I don't expect any magic from Doom either.
I guess it wasn't The Talos Principle then, but I'm sure there was some company that created a shim for OpenGL -> Vulkan and it actually improved performance by more than 15% (does anyone have any insight on this? I can't remember what it was and thus can't find any links about it).
Doom will run faster on Vulkan than on OpenGL; there is no way this won't happen, unless they castrate the functionality or deliberately decide not to use the API as intended.

Too much PR bullcrap, IMHO. It is all raw technology and a shiny term, much like the RGB LED thingies being packed into everything, just because it has to be so, FFS.
Clearly you don't know what you are talking about, just what you've read online (and not even documentation-based). Take a swing at it: build something with both APIs, test and compare them for yourself, and then you might actually have some basis to coherently trash-talk it. Forming an opinion on something you know very little about (just what others told you) is not great. I mean, come on, you have a mind of your own, don't you? That's like disliking a brand of hammers just because people online report hitting their fingernails every time they use hammers from that brand...
 
Joined
Jul 23, 2011
Messages
1,586 (0.32/day)
Location
Kaunas, Lithuania
I guess it wasn't The Talos Principle then, but I'm sure there was some company that created a shim for OpenGL -> Vulkan and it actually improved performance by more than 15% (does anyone have any insight on this? I can't remember what it was and thus can't find any links about it).

I remember Intel showing a demo in one of the Vulkan pre-release e-conferences where the demo ran faster under Vulkan, with Vulkan implemented mostly, but not entirely, on top of OpenGL (so, the opposite thing: Vulkan -> OGL); it already ran faster (I don't remember exact numbers, but 15% should be ballpark) than using OpenGL directly.
Maybe that's what You have in mind?
And when it comes to games, The Talos Principle was literally the first game with Vulkan support (and was, IIRC, officially the "Vulkan launch title", along with being the only Vulkan game [available to the public] for a while), so it's quite a head-scratcher what else it could be.

Doom will run faster on Vulkan than on OpenGL; there is no way this won't happen.

The good ol' MythBusters catchphrase "failure is always an option" applies here quite a bit. You just can never tell when the devs of any game will have their next random mass brain-fart, and what results will follow from it.
 
Joined
Nov 18, 2010
Messages
7,607 (1.47/day)
Location
Rīga, Latvia
does not depend on any hardware capabilities

There is one "but". As a hardware-oriented chap, I only code in pure C and assembly. You can tailor a compiler to fit an architecture's weaknesses or strengths... you can target specific things just by changing a few variable lengths crunching through the pipelines, which automatically costs additional cycles. I guess you see what I am concerned about. Despite now using a green card, I also want some sort of justice for AMD, just for the sake of fair competition. So far I have played with things like the geekslab stuff, for fun and to test things out. The funny thing about machine code and this alike is that there are always erratas and you have to do workarounds. The difference is that with machine code I fall back to direct assembly to bypass the darn compiler that always makes a mess on certain hardware; at a higher level there are broken functions, memory leaks, and random bugs due to compiler issues, and those are usually expensive (performance-wise) to solve and cause slowdowns. In the end, once everything is patched, we will get a compiler tailored for a specific architecture, won't we? It depends only on who contributes the most at the Khronos Group and maintains SPIR-V. I cannot believe both camps will ever have a unified architecture, and Nvidia has been trying funny tricks since Riva TNT times. You don't even need vendor-specific extensions now; the compiler holds the mojo, and with that we can get very different results.

@truth teller, please let off some steam. I guess you really don't want to have a mature dialogue. OpenGL will not go away; I already explained why. They both have strengths and weaknesses.

So, assuming all this information: we have Polaris. I agree with the speculation that it hasn't changed much from Fury, much like an Intel "tick" phase. AMD will gain from Vulkan due to their crap DX11 drivers, and their Vulkan drivers will perform better simply because they don't have to do anything beyond delivering bare access to the GPU resources, so AMD will try to play that joker. I also read the dev comments on Steam about Talos's Vulkan development and wished them luck, as it is a tough job, really. Luckily the game doesn't consist of complex scenes. Is he a neighbour as well?

I wonder how CryEngine, being the ultimate inefficient code cemetery, would run on Vulkan... like a turd, I guess :D
 
Joined
Jun 22, 2015
Messages
203 (0.06/day)
I remember Intel showing a demo in one of the Vulkan pre-release e-conferences where the demo ran faster under Vulkan, with Vulkan implemented mostly, but not entirely, on top of OpenGL (so, the opposite thing: Vulkan -> OGL); it already ran faster (I don't remember exact numbers, but 15% should be ballpark) than using OpenGL directly.
Maybe that's what You have in mind?
Oh, I do remember that alien spaceship, or was it a tornado or something (it ran like shit, for that matter), but it wasn't that; it was a couple of months after that.

And when it comes to games, The Talos Principle was literally the first game with Vulkan support (and was, IIRC, officially the "Vulkan launch title", along with being the only Vulkan game [available to the public] for a while), so it's quite a head-scratcher what else it could be.
I don't think the version of that game with that Vulkan shim was available in the normal release cycle, but rather as an outside/beta/testing update. I could be wrong, though. I'm gonna search a bit more.

@truth teller, please let off some steam. I guess you really don't want to have a mature dialogue. OpenGL will not go away; I already explained why. They both have strengths and weaknesses.
You didn't even read my post, did you? You rascal.

I wonder how CryEngine, being the ultimate inefficient code cemetery, would run on Vulkan... like a turd, I guess :D
Since CryEngine "went open source" and people saw the massive pile of junk the code and tools are (and the extremely limiting license for free usage), it has turned into a dead engine, at least for me and everyone I know who was interested in it. No one in their right mind will touch it, let alone add support for another API (not for free, at least).
 
Joined
Jul 23, 2011
Messages
1,586 (0.32/day)
Location
Kaunas, Lithuania
@Ferrum Master: "does not depend on any hardware capabilities" was purely in the context of being able to implement the VK_NV_glsl_shader extension, i.e. to compile GLSL straight to GPU-ISA-specific code, just like, You know, OpenGL has been doing for years, instead of the Vulkan default of first translating to SPIR-V. Whether the resulting code would be better optimized, or whether it would use the hardware efficiently, is of no concern here.
So yes, in that sense, that extension does not depend on any hardware capabilities other than being able to, well, run shader code to begin with.
When it comes to these graphics API specs, the only hardware-related concern is "does the hardware lack something that makes it outright impossible to implement this part of the spec?", e.g. "We want tessellation. Can this hardware do that? Does it have the required logic for it?"
Although, do keep in mind that when it comes to OpenGL, at least, it is perfectly conformant behaviour to perform [whatever] in software instead of using hardware acceleration. Full hardware acceleration, mixed hardware acceleration with software "emulation", and running purely in software are all fully legit in the eyes of the spec. As long as it produces correct results, the driver can claim support for a capability/extension, regardless of whether it is done in hardware or in software.
Actually, small bits of it are still sometimes done in software. "And You will never notice if it is done right."
Direct3D, on the other hand, AFAIK quite strictly defines what has to be done in hardware...
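The spec may not care, but you can at least see which kind of implementation you ended up with; a minimal sketch (assuming a current OpenGL context; Mesa's software rasterizers report names like "llvmpipe" or "softpipe"):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Query the implementation strings; a software renderer shows up here
 * (e.g. a GL_RENDERER of "llvmpipe ..." for Mesa's software path). */
void print_gl_implementation(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}
```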

I do see what You did there, though. You took a quote out of context to use as a "seed" for an unrelated point. Don't do that; it's kind of a d*** move. We are all adults here – if You want to make a point, just do so. No need for an out-of-context quote to "justify" it ;]

P.S. I know there's a preconception that "software rendering == slow". That is often true, but not always. I have three different software OpenGL implementations installed that I can use at will (I mostly use them for validating stuff). The point is: since I have a beefy CPU, I can run some fairly recent and fairly graphics-intensive games purely in software and still get playable framerates. "Not too shabby for pure software rendering, eh?"

P.P.S. That's it: I'm out. This has already gone off-topic enough, and I seem to be writing walls of text, from a certain point of view, mostly for naught.
Thus, this is my last reply in this thread. Peace out, bros!
 
Joined
Apr 16, 2010
Messages
2,072 (0.38/day)
The Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia.
 

FordGT90Concept

"I go fast!1!11!1!"
All of my hopes and dreams are dashed. :(

It's a good card, no doubt, but having to wait for Vega for an answer to the GTX 1080 is going to suck.
 
Joined
Oct 22, 2014
Messages
14,195 (3.79/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
The Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia.
And for those happy with GTX 970 performance, just buy one and save money on both the purchase price and electricity.
 
Joined
Sep 17, 2014
Messages
22,933 (6.07/day)
Location
The Washing Machine
The Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia.

AMD dropping the ball... err, GPU. Literally.

What can we say? They still haven't learned. They present a way to break open the market with a guy who has broken English, no PR skills, and who nearly broke the damn GPU as well. Linus practically has to drag the info out of him.

I mean, they could have done this so much better. A 480 at 199 bucks is pretty astounding. Why put it out so clumsily and so vaguely!?!? If they drop Hawaii performance at $199, that's going to turn heads.
 