Sunday, May 29th 2016

Next-Gen Radeon "Polaris" Nomenclature Changed?

It looks like AMD is deviating from its top-level performance-grading with its next-generation Radeon graphics cards. The company has maintained the Radeon R3 series for embedded low-power APUs; Radeon R5 for integrated graphics solutions of larger APUs; Radeon R7 for entry-thru-mainstream discrete GPUs (e.g., R7 360, R7 370); and Radeon R9 for the performance-thru-enthusiast segment (e.g., R9 285, R9 290X). The new nomenclature could see the company rely on the model number itself (e.g., 4#0) to denote market positioning, if a popular rumor on tech bulletin boards such as Reddit holds true.

A Redditor posted an image of a next-gen AMD Radeon demo machine powered by a "Radeon RX 480." The "X" could either be a variable, or a series-wide addition prefixing all SKUs in the 400 series. It could also be AMD marketing's way of playing with the number 10 (X), to establish some kind of generational parity with NVIDIA's GeForce GTX 10 series. The placard also depicts a new "Radeon" logo with a different, sharper typeface. The "RX 480" was apparently able to run "Doom" (2016) at 2560x1440 @ 144 Hz, using the OpenGL API.
Source: Reddit

72 Comments on Next-Gen Radeon "Polaris" Nomenclature Changed?

#51
medi01
FordGT90Concept: In the meantime, you'll still get a ~40% boost from having the second GPU through Crossfire.
The strategy makes sense on paper, I admit.
Smaller chips are easier to design.
Consoles are in AMD's pocket.
Yadayada master plan.

But then, the only config faster than 1070 will be dual chip.
Many users avoid such configs, because at least at the moment they do give trouble.
Not only could you NOT get any boost from the second GPU chip, but it might crash on you altogether.

If you price it higher than 1070, people won't buy it.
Also: 232mm2 + 232mm2 = 464mm2

And with a single chip being 115-140 W, a dual chip will be somewhere around 170-210 W, so more than the 1080.

PS
Heck, the 295X2 wipes the floor with the Titan X/980 Ti OC in so many games...
#52
FordGT90Concept
"I go fast!1!11!1!"
Vega is the answer to GTX 1080, not Polaris. The fact Polaris is likely to be about equal to GTX 1070 is in AMD's favor.

As multi-GPU code in games takes over, Crossfire/SLI becomes less important. That should translate to fewer crashes.

Yeah, power will be more but that is no surprise. GCN is capable of async multithreading and that comes at a cost. GPUs can't power down parts like CPUs can because their pipelines are much more complex.

The 295X2 launched at $1,500. I could see the RX 480 launching at $600-700, which puts it squarely in the same price bracket as the GTX 1080.
#53
Vayra86
medi01: The strategy makes sense on paper, I admit.
Smaller chips are easier to design.
Consoles are in AMD's pocket.
Yadayada master plan.

But then, the only config faster than 1070 will be dual chip.
Many users avoid such configs, because at least at the moment they do give trouble.
Not only could you NOT get any boost from the second GPU chip, but it might crash on you altogether.

If you price it higher than 1070, people won't buy it.
Also: 232mm2 + 232mm2 = 464mm2

And with a single chip being 115-140 W, a dual chip will be somewhere around 170-210 W, so more than the 1080.

PS
Heck, the 295X2 wipes the floor with the Titan X/980 Ti OC in so many games...
You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12 and API + console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? It is the same "two = one for more performance" and "more cores" approach they've adopted so many times. I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regards to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've been there before.

FWIW, I really do hope AMD surprises us in a good way and makes choosing them this time around a true no-brainer. They need it, and with NVIDIA releasing the 1080 at this price point, the market evidently needs it too.
#54
medi01
Vayra86: You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12 and API + console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regards to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've been there before.
The problem is, 7,000 of AMD's 8,000 employees are engineers (you can't get a much leaner non-engineering workforce than that).
They do R&D in both GPUs and CPUs, yet their budget is smaller than NVIDIA's alone (and something like 10 times smaller than Intel's).

And then there is that "smaller chips are cheaper to design" thing.

This time they have API support and consoles (nothing like that was going for them in Bulldozer times), so let's see how it goes. I'll keep my fingers crossed. =/
#55
Frick
Fishfaced Nincompoop
Vayra86: You know, I actually see a repeat of past mistakes if they invest heavily in dual GPU as a baseline for performance scaling. Sure, DX12 and API + console development scream for it. But...

Haven't we seen this fail miserably already with the FX processor line? It is the same "two = one for more performance" and "more cores" approach they've adopted so many times. I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regards to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've been there before.

FWIW, I really do hope AMD surprises us in a good way and makes choosing them this time around a true no-brainer. They need it.
Things are a bit different now though. Pure grunt is still king but that will only get you so far. It might be a bit early though, if that is the direction they're going. Bulldozer was way too early.
#56
FordGT90Concept
"I go fast!1!11!1!"
Bulldozer was just a terrible architecture all around. It was made to be different only for the sake of being different. There was nothing rational about its design.

The better comparison is Athlon 64 FX-60: a dual-core CPU when single-core CPUs were all the rage. Just 10 years later and budget CPUs are dual-cores.

If the speculation is correct that the RX 480 is a dual-GPU Polaris 10, it could be the equivalent of an Athlon 64 X2--the affordable alternative to the FX-60, the series of chips that slowly but surely conquered the market until Core 2 Duo debuted. Before this, all that was available were the FX-60s of the GPU world (the biggest, baddest GPUs, with prices to match).
#57
Ferrum Master
Vinska: First of all, GLSL is a shading language and is itself a separate spec from OpenGL
In OpenGL, almost everything is a fragmented, vendor-specific mess. OpenGL 4.5 actually supports almost everything that SPIR-V does, or at least it should. NVIDIA will actually release a cross-compiler between GLSL and SPIR-V (screwing everything up for AMD again, for everyone that will use it).

And normal devs are not usually lazy; they are not plain stupid, breaking things and adding unneeded time and money costs. Vulkan in its current state is still more of a PR gimmick. I cannot see anyone writing their engine and engine-creation tools from scratch in pure C#. UE4 and others are still in development. It consists of functions rooted in OpenGL: the same functions doing the same thing, just a compiler that translates them to SPIR-V and feeds them to the Vulkan driver to render the scene. OpenGL doesn't disappear anywhere.
#58
truth teller
Ferrum Master: Wrong. As long as it uses GLSL it will use OpenGL as a core in the application part. I haven't seen a native Vulkan engine. All the Vulkan-enabled games we have are Vulkan ports, with less efficiency than there actually should be. It is, as always, more of a code mess in reality than the adverts tell you. id won't recode all of their engine in a flash; economically they won't even bother to do it.
what is spir-v? no company worth their salt will continue to ship glsl stuff anymore (shader source shipped to clients, client-side compilation; spir-v is compiled beforehand, no ip loss, yada yada).
you can write a translation layer between vulkan and an ogl application (has been done, cant find links) and that alone _will_ improve performance. no one in their right mind will call that a vulkan-enabled application, but still, it works.

have you seen and coded all the game engines available that use vulkan? have you even coded anything that uses vulkan/ogl/gles for that matter? why are you trying to pass your half-assed assumptions off as truths? gosh...
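The "compiled beforehand" point is the key one: a studio can compile GLSL (or HLSL) to SPIR-V at build time and ship only the binary blob. A minimal sketch of that offline step using Google's shaderc library (purely illustrative; the posters don't name a specific tool, and the reference glslangValidator CLI does the same job):

/* Compile GLSL fragment-shader source to a SPIR-V blob at build time,
 * so only the .spv binary ever ships to clients. */
#include <stdio.h>
#include <string.h>
#include <shaderc/shaderc.h>

int compile_to_spv(const char *glsl_source, const char *out_path)
{
    shaderc_compiler_t compiler = shaderc_compiler_initialize();
    shaderc_compilation_result_t result = shaderc_compile_into_spv(
        compiler, glsl_source, strlen(glsl_source),
        shaderc_glsl_fragment_shader, "shader.frag", "main", NULL);

    int ok = shaderc_result_get_compilation_status(result) ==
             shaderc_compilation_status_success;
    if (ok) {
        FILE *f = fopen(out_path, "wb");
        fwrite(shaderc_result_get_bytes(result), 1,
               shaderc_result_get_length(result), f);
        fclose(f);
    }
    shaderc_result_release(result);
    shaderc_compiler_release(compiler);
    return ok;
}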
Ferrum Master: In OpenGL, almost everything is a fragmented, vendor-specific mess. OpenGL 4.5 actually supports almost everything that SPIR-V does, or at least it should. NVIDIA will actually release a cross-compiler between GLSL and SPIR-V (screwing everything up for AMD again, for everyone that will use it).

And normal devs are not usually lazy; they are not plain stupid, breaking things and adding unneeded time and money costs. Vulkan in its current state is still more of a PR gimmick. I cannot see anyone writing their engine and engine-creation tools from scratch in pure C#. UE4 and others are still in development. It consists of functions rooted in OpenGL: the same functions doing the same thing, just a compiler that translates them to SPIR-V and feeds them to the Vulkan driver to render the scene. OpenGL doesn't disappear anywhere.
ok, no need to answer anything, i can see what the response would be already. you cant even distinguish between the standard and vendor-specific extensions (and those who chose to use them know damn well what happens)...
#59
TheinsanegamerN
FordGT90Concept: Let's do some math!

RX 480
2560 SPs
unknown clock

R9 390
2560 SPs
1000 MHz

R9 390 got 57.4 fps, so let's scale it:

1.0 GHz = 57.4 fps
1.1 GHz = 63.1 fps
1.2 GHz = 68.9 fps
1.3 GHz = 74.6 fps
1.4 GHz = 80.4 fps
1.5 GHz = 86.1 fps
1.6 GHz = 91.8 fps
1.7 GHz = 97.6 fps (beats GTX 1080)

I doubt it will be clocked higher than 1.5 GHz (1.1-1.2 GHz is the most realistic). Unless AMD made some massive improvements to OpenGL rendering, the RX 480 isn't likely to top GTX 1080.
Your math assumes that 1 Polaris SP = 1 Hawaii SP, and that AMD made absolutely zero improvements to IPC, color/texture compression, etc.
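For reference, the quoted projection is nothing more than linear clock scaling of the R9 390's 57.4 fps result. A minimal C sketch of that arithmetic, assuming (as pointed out above) that per-SP throughput stays identical to Hawaii and nothing else changes:

/* Linear clock scaling of the quoted R9 390 figure (57.4 fps at 1.0 GHz).
 * Purely illustrative; ignores IPC, memory bandwidth and compression changes. */
#include <stdio.h>

int main(void)
{
    const double base_fps   = 57.4; /* R9 390 at 1.0 GHz (quoted figure) */
    const double base_clock = 1.0;  /* GHz */

    for (double clock = 1.0; clock <= 1.7 + 1e-9; clock += 0.1)
        printf("%.1f GHz = %.1f fps\n", clock, base_fps * clock / base_clock);
    return 0;
}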
#60
FordGT90Concept
"I go fast!1!11!1!"
There's some improvement to Polaris' architecture but I doubt it will account for much. Polaris is mostly about the process-tech change, which impacts power consumption, transistor count, and clock speeds. AMD is focusing on low power consumption, the transistor count isn't changing much (actually fewer SPs than the 390X), and AMD hasn't given us any indication that clock speeds are changing much either (predominantly to keep power consumption low). The data we do have strongly suggests Fury-like performance for under 150 W. It should be close to the GTX 1070 in virtually every way.
#61
Ferrum Master
truth teller: (and those who chose to use them know damn well what happens)...
You talk as if GameWorks never happened. Devs will use any help, and if a GPU vendor gives it, they will use it; nobody in dev HQs gives a crap about that. As I said, speed- and function-wise OpenGL 4.5 is actually almost the same as Vulkan. If you cannot bind two identical functions with different names, that ain't my problem. OpenGL will live on as a higher-level API using GLSL and will remain the de facto choice for smaller indie projects, just like the AAA OpenGL projects now (Wolfenstein and Doom). Even consoles ship two types of SDK: a higher-level, easier-to-code one and a close-to-the-metal one. OpenGL and GLSL aren't going anywhere. Development time costs a lot of money.

Vulkan doesn't guarantee a performance improvement at all, especially in GPU-bound games, and Doom can run on a P4 coffee machine, as proven here in the forums. On the contrary, it may deliver lower performance, especially at high resolutions. It's the same as it was with Mantle... actually, it is just MantleGL. Same problems, same middling results: more stutter, and no difference with any reasonable i5 (which sadly means faster than any AMD CPU to date).

Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update: on GitHub, people are reporting actually lower performance, plus stutters (obviously due to buggy shader code when casting magic). I won't expect any magic from Doom either.

Too much PR bullcrap, IMHO. It is all raw technology and a shiny term, like the RGB LED thingies being packed into everything, just because it needs to be so, ffs.
#62
librin.so.1
Ferrum Master: NVIDIA will actually release a cross-compiler between GLSL and SPIR-V (screwing everything up for AMD again, for everyone that will use it).
Err... the spec requires a conforming [system-wide] SPIR-V compiler/translator to *at least* be able to compile GLSL code, and the official plan from the very start was "first make GLSL -> SPIR-V, then work on everything else".
That "nvidia's"[1] thing You speak of is probably VK_NV_glsl_shader, which, surprise! surprise! has made it into the core Vulkan spec since Vulkan version 1.0.5 (2016-03-04). What it does is allow loading GLSL shaders directly, skipping the translation/compilation to SPIR-V step. (i.e. instead of [GLSL code] –> [GLSL to SPIR-V compiler] –> [ISA-specific SPIR-V compiler] –> [GPU-ISA-specific machine code], it allows [GLSL code] –> [ISA-specific GLSL compiler] –> [GPU-ISA-specific machine code], skipping the SPIR-V step.)

[1] The only thing Nvidia owns about it is coming up with the idea and writing the extension spec. There are no IP claims (duh, no IP to claim here) and it does not depend on any hardware capabilities / [lack of] limitations, so there is absolutely no reason for other vendors not to implement it. And they now have to, to conform to Vulkan 1.0.5 or later. Although, unlike in OpenGL, where extensions are enabled unless explicitly disabled, in Vulkan most functionality, not only extensions, is disabled unless the programmer asks to enable it. This is to avoid the situations that sometimes happen in OpenGL, where some stuff gets implicitly enabled and unexpectedly gets in the way of code that was written without taking that stuff into account (possibly because it did not even exist at the time of writing)
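To make the two paths concrete, here is a minimal C sketch of both shader-loading routes (assuming a valid VkDevice, a driver that exposes VK_NV_glsl_shader for the second path, and with error handling omitted):

/* Path 1 (standard Vulkan): hand pre-compiled SPIR-V words to the driver. */
#include <string.h>
#include <vulkan/vulkan.h>

VkShaderModule load_spirv(VkDevice dev, const uint32_t *words, size_t bytes)
{
    VkShaderModuleCreateInfo ci = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = bytes,  /* size in bytes, must be a multiple of 4 */
        .pCode    = words,
    };
    VkShaderModule mod = VK_NULL_HANDLE;
    vkCreateShaderModule(dev, &ci, NULL, &mod);
    return mod;
}

/* Path 2 (VK_NV_glsl_shader): pass raw GLSL source instead, letting the
 * driver compile it straight to GPU machine code, as OpenGL always has. */
VkShaderModule load_glsl(VkDevice dev, const char *glsl_source)
{
    VkShaderModuleCreateInfo ci = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = strlen(glsl_source),
        .pCode    = (const uint32_t *)glsl_source,
    };
    VkShaderModule mod = VK_NULL_HANDLE;
    vkCreateShaderModule(dev, &ci, NULL, &mod);
    return mod;
}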

P.S. Both in Vulkan and OpenGL, as long as there are no IP claims and as long as the hardware allows it, "vendor specific" extensions are not that "vendor specific" at all. Other vendors are free to implement them in their drivers, which they often do. For example, on my Nvidia GPU, with the OpenGL implementation I have, maybe some 1/4 of all the "vendor specific" extensions implemented are under "NV" (nvidia, duh), the rest being under "AMD", "ATI" and many other vendors (I count 12 different vendors here)
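A quick sketch of how one could reproduce that kind of count on a GL 3.0+ context (assuming a loader such as GLEW or glad provides glGetStringi, and that a context is already current):

/* Count vendor-prefixed OpenGL extensions exposed by the current driver. */
#include <stdio.h>
#include <string.h>
#include <GL/glew.h>

void count_vendor_extensions(void)
{
    GLint n = 0, nv = 0, amd = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; i++) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (strncmp(ext, "GL_NV_", 6) == 0) nv++;
        if (strncmp(ext, "GL_AMD_", 7) == 0 || strncmp(ext, "GL_ATI_", 7) == 0) amd++;
    }
    printf("%d extensions total, %d GL_NV_*, %d GL_AMD_*/GL_ATI_*\n", n, nv, amd);
}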

EDIT:
Ferrum Master: Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update: on GitHub, people are reporting actually lower performance, plus stutters (obviously due to buggy shader code when casting magic). I won't expect any magic from Doom either.
Well, yeah, Vulkan needs a lot more work on the game dev side and a LOT more optimization work, again, on the game side.
And yeah, Talos released with slower Vulkan perf. Because it was still an early beta implementation and people were basically doing a beta test.
Right now, in most cases, it is actually running faster than any of the other renderers Talos has (it has D3D9, D3D11, OpenGL, OpenGL ES, Vulkan and a software renderer).
And no, their Vulkan renderer does not work like an OpenGL –> Vulkan wrapper.
Sauce: I am on first-name basis with their lead programmer. \_:)_/
#63
ensabrenoir
.....bah, the X is purely psychological.... Anything with an X attached to it is automatically cool, mysterious and powerful, like Chemical X or Weapon X (seriously, try it with anything... Butterfly X). Unlike "ultra," which equals failure no matter how good the product.... like Ultrabook, or Ultra Brite toothpaste (seriously, who uses that?)
#64
truth teller
Ferrum Master: You talk as if GameWorks never happened. Devs will use any help, and if a GPU vendor gives it, they will use it; nobody in dev HQs gives a crap about that.
gamesomewhatworks doesnt even have anything to do with this "opengl scenario". when devs choose to use extensions outside the arb set, they know those are _not_ implemented by other vendors (it's outside the standard), so they know the game will crash on other setups (at most, khronos might take some of those outside-spec extensions and include them in the next ogl version if they are useful, and then all vendors must implement them before they can claim version compatibility, even if it's a software implementation/emulation).
Ferrum Master: As I said, speed- and function-wise OpenGL 4.5 is actually almost the same as Vulkan. If you cannot bind two identical functions with different names, that ain't my problem.
no, just no.
no queue prioritization, limited fencing, almost hard set pipelining, etc...
vulkan is _not_ a resource name change, but you... errm... someone could spend some time creating a shim to emulate opengl on top of vulkan and still get some performance increase with just that
and why should that be your problem? thats the developers problem, wat da hell?
Ferrum Master: OpenGL will live on as a higher-level API using GLSL and will remain the de facto choice for smaller indie projects, just like the AAA OpenGL projects now (Wolfenstein and Doom). Even consoles ship two types of SDK: a higher-level, easier-to-code one and a close-to-the-metal one. OpenGL and GLSL aren't going anywhere. Development time costs a lot of money.
ofc ogl wont go anywhere, the spec will be kept frozen and vendors will maintain compatibility with it in the future for old software's sake.
if devs want a half-assed implementation to cut development costs, they will stick with direct3d: easier development, easier error handling and debugging, easier device binding/management, etc. the driver will help you a lot, even when you are doing stuff wrong. no aaa team will lose time & money porting their stuff to opengl, other than indie teams experimenting with some api just to be compliant.
Ferrum Master: Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update: on GitHub, people are reporting actually lower performance, plus stutters (obviously due to buggy shader code when casting magic). I won't expect any magic from Doom either.
i guess it wasnt talos principle then, but im sure there was some company that created a shim for ogl->vulkan and it actually improved performance by more than 15% (does anyone have any insight on this? i cant remember what it was and thus cant find any links about it)
doom will run faster in vulkan than in opengl, there is no way this wont happen. unless they start to castrate the functionality or decide to not use the api as it was intended on purpose
Ferrum Master: Too much PR bullcrap, IMHO. It is all raw technology and a shiny term, like the RGB LED thingies being packed into everything, just because it needs to be so, ffs.
clearly you know nothing about what you are talking about, just what you read online (and not even documentation-based). take a stab at it: build something with both apis, test and compare them for yourself, and then you might actually have some basis to coherently trash-talk it. forming an opinion on something you know very little about (just what others told you) is not the best; i mean, come on, you have a mind of your own, dont you? thats like disliking a brand of hammers just because people online report that they hit their fingernails every time they use hammers from that brand...
#65
librin.so.1
truth teller: i guess it wasnt talos principle then, but im sure there was some company that created a shim for ogl->vulkan and it actually improved performance by more than 15% (does anyone have any insight on this? i cant remember what it was and thus cant find any links about it)
I remember Intel showed a demo in one of the Vulkan pre-release e-conferences where the demo ran faster under Vulkan; there, Vulkan was implemented mostly, but not entirely, on top of OpenGL (so, the opposite thing: Vulkan->OGL), and it already ran (I don't remember exact numbers, but 15% should be ballpark) faster than directly using OpenGL.
Maybe You have this in mind?
And when it comes to games, The Talos Principle was literally the first game with Vulkan support (and was IIRC officially the "Vulkan launch title", along with being the only Vulkan game [available to the public] for a while), so it's quite a headscratcher what else it could be.
truth teller: doom will run faster in vulkan than in opengl, there is no way this wont happen.
the good ol' Mythbusters "failure is always an option" catchphrase applies here quite a bit. You just can never tell when the devs of any game will get their next random mass-brainfart and what results would follow from it.
#66
Ferrum Master
Vinska: does not depend on any hardware capabilities
There is one "but". I only code in pure C and assembly, as a hardware-oriented chap. You can tailor a compiler to fit an architecture's weaknesses or strengths... you can target specific things just by changing a few variable lengths crunching through the pipelines, and that automatically causes additional cycles to be used. I guess you know what I am concerned about. Despite now using a green card, I also want some sort of justice towards AMD, just for the sake of fair competition. So far I have played quite a bit with things like the geekslab stuff, for fun and for testing things out. The funny thing between machine code and this is that there are always erratas and you have to do workarounds. The difference is that in machine code I fall back to direct assembly to bypass the darn compiler that always causes a mess on certain hardware, while at a higher level there are broken functions, memory leaks and random bugs due to compiler issues, and those are usually expensive (performance-wise) to solve and cause slowdowns. In the end, being all patched up, we will get a compiler tailored for a specific architecture, won't we? It depends only on who contributes the most at the Khronos Group and maintains SPIR-V. I cannot believe both camps will ever have a unified architecture, and NVIDIA has always tried some funny tricks, since Riva TNT times even. You don't even need vendor-specific extensions now; the compiler holds the mojo, and with that we can get very different results.

@truth teller please let off some steam. I guess you really don't want to have a mature dialogue. OpenGL will not go away; I already explained why. They both have strengths and weaknesses.

So, taking all this information into account: we have Polaris. I agree with the speculation that it hasn't changed much from Fury, just like an Intel "tick" phase. AMD will gain from Vulkan due to their crap DX11 drivers, and the Vulkan drivers will perform better simply because they don't have to do anything more than deliver bare access to the GPU resources, so AMD will try to play their joker. I also read the dev comments on Steam about the Talos Vulkan development and wished them luck, as it is a tough job really. Luckily the game doesn't consist of complex scenes. Is he a neighbour also?

I wonder how CryEngine, being the ultimate inefficient code cemetery, would run on Vulkan... I guess like a turd :D
#67
truth teller
Vinska: I remember Intel showed a demo in one of the Vulkan pre-release e-conferences where the demo ran faster under Vulkan; there, Vulkan was implemented mostly, but not entirely, on top of OpenGL (so, the opposite thing: Vulkan->OGL), and it already ran (I don't remember exact numbers, but 15% should be ballpark) faster than directly using OpenGL.
Maybe You have this in mind?
oh i do remember that alien space ship, or was it a tornado or something (ran like shit for that matter), but it wasnt that, it was a couple months after that
Vinska: And when it comes to games, The Talos Principle was literally the first game with Vulkan support (and was IIRC officially the "Vulkan launch title", along with being the only Vulkan game [available to the public] for a while), so it's quite a headscratcher what else it could be.
i dont think the version of that game with that vulkan shim was available in the normal release cycle of the game, but rather as an outside/beta/testing update. i could be wrong though. im gonna search a bit more
Ferrum Master: @truth teller please let off some steam. I guess you really don't want to have a mature dialogue. OpenGL will not go away; I already explained why. They both have strengths and weaknesses.
you didnt even read my post did you? you rascal
Ferrum Master: I wonder how CryEngine, being the ultimate inefficient code cemetery, would run on Vulkan... I guess like a turd :D
since cryengine has "gone opensource" and people saw the massive pile of junk that the code and tools are (and the extremely limiting license for free usage), it has turned into a dead engine, well at least for me and everyone i know that was somewhat interested in it. no one in their right mind will touch that, let alone add another api's support to it (not for free at least)
#68
librin.so.1
@Ferrum Master "does not depend on any hardware capabilities" was purely in the context of being able to implement the VK_NV_glsl_shader extension, to compile GLSL straight to GPU-ISA-specific code, just like, You know, OpenGL has been doing for years, instead of doing the Vulkan default of first translating to SPIR-V. Whether the resulting code would be better optimized or not, or whether it would use the hardware efficiently, is of no concern here.
So yes, in that sense, that extension does not depend on any hardware capabilities other than being able to, well, run shader code to begin with.
When it comes to these graphics API specs, the only hardware-related concern is "does the hardware lack something that makes it straight impossible to implement this part of the spec?". e.g. "we want tessellation. Can this hardware do that? Does it have the required logic for it?"
Although, do keep in mind that when it comes to OpenGL, at least, it is perfectly conformant behaviour to perform [whatever] in software instead of using hardware acceleration. Full hardware acceleration, mixed hardware acceleration with software "emulation", and running purely in software are all fully legit in the eyes of the spec. As long as it is producing correct results, the driver can claim support for a capability / extension, regardless of whether it is done in hardware or in software.
Actually, small bits of it are still sometimes done in software. "And You will never notice if it is done right."
Direct3D, on the other hand, AFAIK, quite strictly defines what has to be done in hardware...

I do see what You did there, though. You took a quote out of context, to use it as a "seed" to make an unrelated point. Don't do that. It's kind of a d*** move. We are all adults here – if You want to make a point, just simply do so. No need for an out-of-context quote to "justify" making the point ;]

P.S. I know there's a predisposition that "software rendering == slow". That is often true, but not always. I have three different software OpenGL implementations installed that I can use at will if I want to (I mostly use them for validating stuff). The point is, though: since I have a beefy CPU, I can run some fairly recent and fairly graphics-intensive games purely in software and still get playable framerates. "Not too shabby for purely software rendering, eh?"
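For anyone curious which implementation is actually doing the rendering, the standard GL 1.0 query strings give it away; a minimal sketch, assuming a GL context is already current (software stacks typically report names such as "llvmpipe" or "softpipe" here):

/* Query which OpenGL implementation is rendering the current context. */
#include <stdio.h>
#include <GL/gl.h>

void print_gl_renderer(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}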

P.P.S. That's it: I'm out. This has already gone off-topic enough and I seem to be writing walls of text, from a certain point of view, mostly for naught.
Thus, this is my last reply on this thread. Peace out, bros!
#69
ensabrenoir
Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia
#70
FordGT90Concept
"I go fast!1!11!1!"
All of my hopes and dreams are dashed. :(

It's a good card, no doubt, but having to wait for Vega to get a response to GTX 1080 is going to suck.
#71
Caring1
ensabrenoir: Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia
And for those happy with GTX 970 performance, just buy one and save money on both the purchase price and electricity consumption.
#72
Vayra86
ensabrenoir: Radeon 480 is only $199, so AMD wants you to buy two to compete with the 10 series from Nvidia
AMD dropping the ball -err GPU. Literally.

What can we say? They still haven't learned. They present a way to break open the market with a guy who has broken English, no PR skills, and who nearly broke the damn GPU as well. Linus practically has to drag the info out of him.

I mean, they could have done this so much better. A 480 at 199 bucks is pretty astounding. Why put it out so clumsily and so vaguely!?!? If they drop Hawaii performance at 199, that's going to turn heads.