Tuesday, June 11th 2024

Possible Specs of NVIDIA GeForce "Blackwell" GPU Lineup Leaked

Possible specifications of the various NVIDIA GeForce "Blackwell" gaming GPUs were leaked to the web by Kopite7kimi, a reliable source for NVIDIA leaks. These are the specs of the maxed-out silicon; NVIDIA will carve out several GeForce RTX 50-series SKUs based on these chips, which could end up with lower shader counts than shown here. We've known from older reports that there will be five chips in all, the GB202 being the largest, followed by the GB203, the GB205, the GB206, and the GB207. There is a notable absence of a successor to the AD104, GA104, and TU104, because NVIDIA is taking a slightly different approach to the performance segment with this generation.

The GB202 is the halo-segment chip that will drive the possible RTX 5090 (RTX 4090 successor). This chip is endowed with 192 streaming multiprocessors (SMs), or 96 texture processing clusters (TPCs) at two SMs per TPC. These 96 TPCs are spread across 12 graphics processing clusters (GPCs), each containing 8 of them. Assuming that "Blackwell" retains the same 256 CUDA cores per TPC that the past several generations of NVIDIA gaming GPUs have had, we end up with a total CUDA core count of 24,576. Another interesting aspect of this mega-chip is memory: the GPU implements next-generation GDDR7 memory and uses a mammoth 512-bit memory bus. Assuming the 28 Gbps memory speed that has been rumored for NVIDIA's "Blackwell" generation, this chip has 1,792 GB/s of memory bandwidth on tap!
The GB203 is the next chip in the series, poised to be a successor in name to the current AD103. It generationally reduces shader counts, counting on the architecture and clock speeds to more than make up for it in performance, while retaining the 256-bit bus width of the AD103. The net result could be a significantly smaller GPU than the AD103 with better performance. The GB203 is endowed with 10,752 CUDA cores, spread across 84 SM (42 TPCs). The chip has 7 GPCs, each with 6 TPCs. The memory bus, as mentioned, is 256-bit, and at a memory speed of 28 Gbps it would yield 896 GB/s of bandwidth.

The GB205 will power the lower half of the performance segment in the GeForce "Blackwell" generation. This chip has a rather surprising CUDA core count of just 6,400, spread across 50 SM, which are arranged in 5 GPCs of 5 TPCs each. The memory bus is 192-bit wide; at 28 Gbps, this would result in 672 GB/s of memory bandwidth.

The GB206 drives the mid-range of the series. This chip has 4,608 CUDA cores, spread across 36 SM (18 TPCs). The 18 TPCs span 3 GPCs of 6 TPCs each. Besides the smaller shader count, the key differentiator between the GB205 and GB206 is memory bus width, which is narrowed to 128-bit for the GB206. With the same 28 Gbps memory speed being used here, such a chip would end up with 448 GB/s of memory bandwidth.

At the entry level there is the GB207, a significantly smaller chip with just 2,560 CUDA cores across 20 SM (10 TPCs), spanning two GPCs of 5 TPCs each. The memory bus width is unchanged at 128-bit, but the memory type is older-generation GDDR6. Assuming NVIDIA uses 18 Gbps memory speeds, the chip ends up with 288 GB/s on tap.
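
All of the figures above follow the same arithmetic: CUDA cores = GPCs × TPCs per GPC × 256, and memory bandwidth = bus width in bits ÷ 8 × data rate in Gbps. The short Python sketch below is a quick sanity check that reproduces the numbers quoted in this article from the leaked topology, assuming two SMs per TPC, 256 CUDA cores per TPC, and the rumored 28 Gbps (GDDR7) and 18 Gbps (GDDR6) data rates; it illustrates the math and is not itself part of the leak.

```python
# Back-of-the-envelope check of the leaked "Blackwell" configurations.
# Assumptions (carried over from recent NVIDIA generations, not confirmed
# by the leak itself): 2 SMs per TPC, 256 CUDA cores per TPC.

CORES_PER_TPC = 256

# (GPCs, TPCs per GPC, bus width in bits, data rate in Gbps)
chips = {
    "GB202": (12, 8, 512, 28),  # GDDR7
    "GB203": (7, 6, 256, 28),   # GDDR7
    "GB205": (5, 5, 192, 28),   # GDDR7
    "GB206": (3, 6, 128, 28),   # GDDR7
    "GB207": (2, 5, 128, 18),   # GDDR6
}

for name, (gpcs, tpcs_per_gpc, bus_bits, gbps) in chips.items():
    tpcs = gpcs * tpcs_per_gpc
    sms = tpcs * 2
    cuda_cores = tpcs * CORES_PER_TPC
    # Bandwidth: data rate per pin (Gbps) times bus width (pins), divided by 8 bits per byte.
    bandwidth_gbs = bus_bits * gbps / 8
    print(f"{name}: {sms} SM, {cuda_cores} CUDA cores, {bandwidth_gbs:.0f} GB/s")
```

Running this reproduces 24,576 cores and 1,792 GB/s for the GB202 down to 2,560 cores and 288 GB/s for the GB207.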

NVIDIA is expected to double down on large on-die caches across all of these chips, to cushion their memory sub-systems. We expect several other innovations in the areas of ray tracing performance, AI acceleration, and other features exclusive to the architecture. The company is expected to debut the series sometime in Q4 2024.
Source: kopite7kimi (Twitter)

141 Comments on Possible Specs of NVIDIA GeForce "Blackwell" GPU Lineup Leaked

#26
Dristun
DavenNvidia stops wasting resources on RT and AI.
AMD's already all-in on AI just like everyone else in the market, lol, and we're just one gen away from seeing if they're going to finally improve their RT. What are you going to do if they follow nvidia?
#27
Assimilator
Vayra86Yeah! More stagnation! Let's vote for stagnation!

I don't know what you're looking at, but I'm seeing a slight uptick per tier, with all things increased except the bus width. GDDR7 makes up for part of the deficit though so bandwidth won't be worse than Ada relatively, that's good. But capacity, still 12GB in the midrange and 8GB bottom end? You're saying this is a good thing, now?
No, I'm saying it's good enough without sufficient competition.
Vayra86Ada is already bandwidth constrained at the lower tiers. Nvidia is trying real hard to keep those tiers to what, 1080p gaming?
Every company wants to segment their products. When there's no competition that becomes a lot easier.
Vayra86To each their own, but I think in 2025 people would like to move on from 1080p.
I've only played in 1440p since 2019. The 4060 Ti I switched to earlier this year has not given me any problems in this regard despite "only" 8GB VRAM and "only" a 128-bit bus. The only people you regularly see bemoaning NVIDIA GPUs are not the people who own one.
Vayra86As for AMD's inability to compete... RT fools & money were parted here. AMD keeps pace just fine in actual gaming and raster perf and is/has been on many occasions cheaper. They compete better than they have done in the past. Customers just buy Nvidia, and if that makes them feel 'screwed over'... yeah... a token of the snowflake generation, that also doesn't vote and then wonders why the world's going to shit.
It's nothing to do with RT and everything to do with marketing and advertising. When people read the news they see "NVIDIA" thanks to the AI hype, and that has made a significant and quantifiable impression on ordinary consumers' minds. AMD has completely failed to understand this basic concept; they seem to be operating on the assumption that having a slightly worse product at a slightly lower price point is good enough, and the market has very obviously shown that it absolutely is not. AMD has options to fight back against the mindshare that NVIDIA has, with things like price cuts, but again, because AMD doesn't understand that they need to do this, they aren't.

Let's make it clear here: AMD is staring down the barrel when it comes to GPUs. The last 7 quarters are the worst for them since Jon Peddie Research started tracking this metric a decade ago; they had never dropped under 18% until Q3 2022, and with the upcoming Blackwell launch and nothing new from AMD, we can expect NVIDIA to breach 90% of the desktop GPU market. That is annihilation territory for AMD GPUs; that is territory where they consider exiting the desktop consumer market and concentrating on consoles only. That is territory where your company should be pulling out all the stops to recover, yet what is AMD doing in response? Literally nothing.

And it all compounds. If NVIDIA believes they're going to outsell AMD by 9:1, NVIDIA is going to book 9x as much capacity at TSMC, which gives them a much larger volume discount than AMD will get, which means AMD's GPUs cost more; AIBs will have the same issue with all the other components they use, like memory chips, PCBs, ... Once you start losing economies of scale and the associated discounts, you end up in an even worse position when it comes to adjusting your prices to compete.
#28
Vayra86
AssimilatorLet's make it clear here, AMD is staring down the barrel regarding GPUs. The last 7 quarters are the worst for them since Jon Peddie Research started tracking this metric a decade ago, they had never dropped under 18% until Q3 2022, and with the upcoming Blackwell launch and nothing new from AMD we can expect NVIDIA to breach 90% of the desktop GPU market. That is annihilation territory for AMD GPUs, that is territory where they consider exiting the desktop consumer market and concentrate on consoles only. And what is AMD doing in response? Literally nothing.
See this is conjecture. Who said this? AMD isn't saying this, they're simply continuing development and they're not trying to keep pace with Nvidia because they know they can't.

Is AMD staring down the barrel? Is this really worse here than the years they were getting by on very low cashflow/margin products, pre-Ryzen? Are we really thinking they will destroy the one division that makes them a unique, synergistic player in the market?

There are a few indicators of markets moving.
- APUs are getting strong enough to run games properly, as gaming requirements are actually plateauing; you said it yourself, that 4060 Ti can even run 1440p. Does the PC market truly need discrete GPUs for a large segment of its gaming soon? Part of this key driver is also the PC handheld market, which AMD has captured admirably and IS devoting resources to.
- Their custom chip business line floats entirely on the presence and continued development of RDNA
- Their console business floats on continued development of RDNA - notably, sub high end, as those are the chips consoles want
- The endgame in PC gaming still floats on console ports ahead of PC-first games at this point, and with more cloud-based play and unification between platforms, that won't become less pronounced, it will become more so.
- AI will always move fastest on GPUs, another huge driver to keep RDNA.

Where is heavy RT in this outlook, I wonder? I'm not seeing it. So Nvidia will command its little mountain of 'RT aficionados on the PC', a dwindling discrete PC gaming market with a high cost of entry, and I think AMD will be fine selling vastly reduced numbers of GPUs in that discrete PC segment, because it's just easy money alongside their other strategic business lines.

This whole thing isn't new and hasn't changed since, what, the first PS4.

AMD is fine, and I can totally see why they aren't moving. It would only introduce more risk for questionable gains; they can't just conjure up the technology to 'beat Nvidia', can they? Nvidia beats them at better integration of software and hardware.

Still, I see your other points about them and I understand why people are worried. But this isn't new to AMD. It's the story of their life, and they're still here, and their shares have gained 400% over the last five years.
#29
Chrispy_
SithaerCurious to see how the 5060/Ti maybe a 5070 will end up.
I have no upgrade plans left for this year but sometime next year I wouldn't mind to upgrade my GPU and thats the highest I'm willing to go/what my budget allows. 'those will be plenty expensive enough where I live even second hand..:shadedshu:'
4060Ti 16GB is a 1080p card in 2023. I bought one (needed the VRAM buffer for work) and dumped it into the second PC in the living room with a 4K TV. It can barely handle 1440p without performance nosediving because there's simply not enough bandwidth.

If they're going to keep it on a 128-bit bus, GDDR7 is maybe going to turn it into a 1440p card. At 448GB/s it's still 12% less bandwidth on paper than a vanilla 4070 which is okay at 1440p, but that's with lower-latency GDDR6. I'm not 100% sure you can just compare bandwidth between GDDR6 and GDDR7 because latency will have doubled, clock for clock - which means (only a guess here) that the 5060Ti will have 88% the bandwidth of a 4070 but ~50% higher latency. That's going to make it considerably handicapped compared to a 4070 overall, so I guess the rest of it is down to how well they've mitigated that shortcoming with better cache, more cache, and hopefully some lessons learned from the pointlessness of the 4060Ti.
#30
Vayra86
Chrispy_4060Ti 16GB is a 1080p card in 2023. I bought one (needed the VRAM buffer for work) and dumped it into the second PC in the living room with a 4K TV. It can barely handle 1440p without performance nosediving because there's simply not enough bandwidth.

If they're going to keep it on a 128-bit bus, GDDR7 is maybe going to turn it into a 1440p card. At 448GB/s it's still more than 12% less bandwidth than a vanilla 4070 which is a decent 1440p offering, but that's with lower-latency GDDR6, I'm not 100% sure bandwidth comparisons between GDDR6 and GDDR7 are possible because latency will have doubled, clock for clock - which means (only a guess) that the 5060Ti will have 88% the bandwith of a 4070 but ~50% higher latency.
They could fix the latency with cache
#31
dgianstefani
TPU Proofreader
the54thvoidTwo generation gap? For me the 2080ti to 4070ti was a 50% jump.

Settle for nothing less! :cool:
A 50% jump is great considering you went down ~2 tiers in the stack and are only using ~30 W more power compared to the FE 2080 Ti (a 4080 Ti doesn't exist, and the 4070 Ti Super, essentially a 4080 lite, is arguably a different tier than the 4070 Ti).

I'm hoping two generations plus the same tier, or 1-2 tiers up (5090/5090 Ti?), is enough to double performance.

Fingers crossed lol. If I do go 5090/Ti I'll likely keep it three generations to recoup the extra cost.
Vayra86They could fix the latency with cache
Maybe; still, I think xx60-class cards will be native 1080p/DLSS 1440p for at least this next gen.

Important to bear in mind 1080p on PC or 1440p DLSS arguably looks better than "native" 4K on console, which is realistically the competition at the entry level.

Native in quotes because consoles typically vary resolution and make heavy use of mediocre upscaling when playing at 4K, that or have a 30 FPS frame target which is pathetic.
#32
Raysterize
Here we go again...

Should be:

5090 - 512-bit 32GB <-- Needed for 4K Max settings in all games with 64GB being overkill.
5080 - 384-bit 24GB <-- 16GB is too little for something that will be around the power of a 4090.
5070 - 256-bit 16GB <-- Sweet spot for mid range.
5060 Ti - 192-bit 12GB <-- Would sell really well.
5060 - 128-bit 8GB <-- 8GB is fine if priced right...

And for the people slating AMD I had the ASUS 7900XTX TUF Gaming OC and it was incredible! Sure the street lights would flicker when I was 4K gaming but hey ho...
#33
Sithaer
Chrispy_4060Ti 16GB is a 1080p card in 2023. I bought one (needed the VRAM buffer for work) and dumped it into the second PC in the living room with a 4K TV. It can barely handle 1440p without performance nosediving because there's simply not enough bandwidth.

If they're going to keep it on a 128-bit bus, GDDR7 is maybe going to turn it into a 1440p card. At 448GB/s it's still 12% less bandwidth on paper than a vanilla 4070 which is okay at 1440p, but that's with lower-latency GDDR6. I'm not 100% sure you can just compare bandwidth between GDDR6 and GDDR7 because latency will have doubled, clock for clock - which means (only a guess here) that the 5060Ti will have 88% the bandwidth of a 4070 but ~50% higher latency. That's going to make it considerably handicapped compared to a 4070 overall, so I guess the rest of it is down to how well they've mitigated that shortcoming with better cache, more cache, and hopefully some lessons learned from the pointlessness of the 4060Ti.
I'm not planning to upgrade my resolution/monitor so I'm fine in that regard. :)
2560x1080 21:9 is somewhere between 1080p and 1440p based on my own testing over the years, and most of the time I run out of raw GPU raster performance first when I crank up the settings at this resolution, so I wouldn't exactly mind 12 GB of VRAM either, but 16 is welcome if it's not too overpriced. 'I'm also a constant user of DLSS whenever it's available in a game, so that helps'
Tbh if the ~mid-range 5000 series fails to deliver in my budget range then I will just pick up a second-hand 4070 Super and call it a day. 'plenty enough for my needs'
#34
Chrispy_
Vayra86They could fix the latency with cache
Yeah, that's what they said about Ada, and that didn't work - so I'll believe it when I see performance scaling without a huge nosedive!

Maybe a combination of refinements to the cache that they got wrong with Ada and the switch to GDDR7 will be enough. As always, it'll really just come down to what they're charging for it - the 4060 Ti 16GB would have been a fantastic $349 GPU, but that's not what we got...
SithaerTbh if the ~mid range 5000 serie fails to deliver in my budget range then I will just pick up a second hand 4070 Super and call it a day. 'plenty enough for my needs'
If the major benefits to the 50-series are for AI, the 40-series will remain perfectly good for this generation of games.
#35
Durvelle27
AssimilatorI really wish NVIDIA had decided to increase the VRAM capacity and bus width over Ada. Not because more VRAM and a wider bus actually does anything for performance, but because it would at least stop Radeon fanboys crying about how NVIDIA is screwing buyers over. News flash, the 88% of people who own an NVIDIA GPU only feel screwed over by AMD's inability to compete.
Your post definitely smells of fanboying :wtf:

Which is so laughable considering AMD has no problem competing with Nvidia's offerings outside of the RTX 4090

The RX 7900XTX Trades blows with the RTX 4080 Super mostly edging it out
The RX 7900XT beats the RTX 4070Ti Super
The RX 7900GRE Beats the RTX 4070 Super
The RX 7800XT Beats the RTX 4070
etc....

All while offering much better prices




#36
Daven
Durvelle27Your post definitely smells of fanboying :wtf:

Which is so laughable considering AMD has no problem competing with Nvidias offerings outside of the RTX 4090
Nvidia brand loyalists are fixated on three things:
  • RT
  • DLSS
  • The internet myth that AMD has fundamental driver problems and Nvidia doesn't
Outside of those three things, the GPU market looks very even and competitive with AMD doing slightly better in performance and price as you pointed out. But even if all three of my points above didn't exist, these loyalists would still buy Nvidia. But I appreciate you and everyone else doing what they can to prevent the blind fealty to one company that threatens to ruin our DIY PC building market that we love so much.
#37
Denver
What a monstrous difference from the largest chip to the level below. More than 2x bigger. :')
#38
Chomiq
Daven
  • The internet myth that AMD has fundamental driver problems and Nvidia doesn't
You'd be surprised how often I've heard "Aaaaand AMD display driver just crashed" from my buddy rocking a 6600 XT on a new AM5 system while playing the same game online.
#39
dgianstefani
TPU Proofreader
DenverWhat a monstrous difference from the largest chip to the level below. More than 2x bigger. :')
4090 wasn't fully enabled, not even close.
5090 probably won't be either.

These 100% enabled die numbers aren't representative of consumer cards, but Quadro ones.
#40
TheDeeGee
DavenAMD inability to compete is because no one will buy their chips even though they are very competitive against Nvidia's offerings. Luckily, you Assimilator has just volunteered to buy AMD as your next graphics card to help drive down Nvidia prices. I will join you and together we will show everyone that the only way to bring about a competitive market is for everyone to stop buying brand and gimmicks and start buying great performance per dollar tech regardless of what name is on the box.


I'll either be buying a 9950X3D and a Radeon 8900XTX for my next build or skip a generation and get Zen 6 and RDNA5. Since AMD is best for gaming in my opinion and will continue to focus equally between gaming and AI, my dollars will continue to go to them until Nvidia stops wasting resources on RT and AI.
Sucks to be you, but Path Tracing is the future of videogame lighting; even AMD will have to optimize for it.
#41
Durvelle27
DavenNvidia brand loyalists are fixated on three things:
  • RT
  • DLSS
  • The internet myth that AMD has fundamental driver problems and Nvidia doesn't
Outside of those three things, the GPU market looks very even and competitive with AMD doing slightly better in performance and price as you pointed out. But even if all three of my points above didn't exist, these loyalists would still buy Nvidia. But I appreciate you and everyone else doing what they can to prevent the blind fealty to one company that threatens to ruin our DIY PC building market that we love so much.
RT still isn't viable, as the performance hit is still too big without DLSS

DLSS is ok but so is FSR

And yea I hear that a lot. Which is funny because I've used AMD since the HD 4000 days and haven't had driver issues since Hawaii. Which was quite some time ago.
#42
Caring1
DenverMore than 2x bigger. :')
That's what she said
#43
Onasi
dgianstefani4090 wasn't fully enabled, not even close.
5090 probably won't be either.

These 100% enabled die numbers aren't representative of consumer cards, but Quadro ones.
That's actually an important point that people seem to miss. If the chart turns out correct (and that's a big IF), then I would wager that a full GB202 with 64 gigs will be the most expensive pro-card config. Said 64 gigs might not even be GDDR7; we have precedent with the RTX 6000 Ada using regular GDDR6 instead of 6X. Would be interesting to see if, this go-around, the yields will actually be enough to create a fully enabled card. With AD102, there never WAS a full-chip card, and the 4090 was obvious dregs sold for a ton to consumers.
#44
hsew
Vayra86Yeah! More stagnation! Let's vote for stagnation!

I don't know what you're looking at, but I'm seeing a slight uptick per tier, with all things increased except the bus width. GDDR7 makes up for part of the deficit though so bandwidth won't be worse than Ada relatively, that's good. But capacity, still 12GB in the midrange and 8GB bottom end? You're saying this is a good thing, now? Ada is already bandwidth constrained at the lower tiers. Nvidia is trying real hard to keep those tiers to what, 1080p gaming?

To each their own, but I think in 2025 people would like to move on from 1080p. The 8GB tier is by then bottomline useless and relies mostly on cache; the 12GB tier can't ever become a real performance tier midrange for long, its worse than the position Ada's 12GBs are in today in terms of longevity. Sure, they'll be fine today and on release. But they're useless by or around 2026, much like the current crop of Ada 12GBs.

As for AMD's inability to compete... RT fools & money were parted here. AMD keeps pace just fine in actual gaming and raster perf and is/has been on many occasions cheaper. They compete better than they have done in the past. Customers just buy Nvidia, and if that makes them feel 'screwed over'... yeah... a token of the snowflake generation, that also doesn't vote and then wonders why the world's going to shit.

You can't fix stupidity. Apparently people love to watch in apathy as things escalate into dystopia, spending money as they go and selling off their autonomy one purchase and subscription at a time.
Personally I blame AMD for not being able to compete for so long in terms of perf/watt, software (read: following in nVidia’s footsteps), drivers, *compatibility with emerging technologies such as RT and AI especially* (call/cope it how some may) etc… Their recent move of leaving the high end to Nvidia was basically them admitting defeat and now prices are sky high. The fact of the matter is integrated graphics makes a dGPU a nonessential part of a system, and by that I mean since you technically aren’t forced to buy one in the same vein that you’re forced to buy DRAM (especially given that dGPUs are interchangeable, not being locked to a certain vendor like you would be with a CPU socket for example), you really have to sell a product based on merit more than anything.

And if anyone thinks AMD are innocent in all this, don’t forget, they launched their 7900XTX at $1,000. So they aren’t gonna save you either.
#45
dgianstefani
TPU Proofreader
OnasiThat’s actually an important point that people seem to miss. If the chart turns out correct (and that’s a big IF), then I would wager that a full GB202 with 64 gigs will be the most expensive pro-card config. Said 64 gigs might not even be GDDR7, perhaps, we had the precedent with RTX6000 Ada using regular GDDR6 instead of 6X. Would be interesting to see if this go around the yields will actually be enough to create a fully enabled card. With AD102, there never WAS a full-chip card. And the 4090 was obvious dregs sold for a ton to consumers.
Yeah, and the 4090 Ti was likely cancelled because there was no competition for the 4090. With RDNA4 supposedly being a 7900 XTX at 7800 XT prices, I doubt the full-die 5090/Ti is needed either.

Why sell 90-100% enabled dies to consumers when you can sell them for 2-3x the price as Quadro cards anyway?
#46
Vayra86
hsewPersonally I blame AMD for not being able to compete for so long in terms of perf/watt, software (read: following in nVidia’s footsteps), drivers, *compatibility with emerging technologies such as RT and AI especially* (call/cope it how some may) etc… Their recent move of leaving the high end to Nvidia was basically them admitting defeat and now prices are sky high. The fact of the matter is integrated graphics makes a dGPU a nonessential part of a system, and by that I mean since you technically aren’t forced to buy one in the same vein that you’re forced to buy DRAM (especially given that dGPUs are interchangeable, not being locked to a certain vendor like you would be with a CPU socket for example), you really have to sell a product based on merit more than anything.

And if anyone thinks AMD are innocent in all this, don’t forget, they launched their 7900XTX at $1,000. So they aren’t gonna save you either.
The prices were sky high before 'AMD admitted defeat'. It has had zero impact - Nvidia released SUPER cards with a better perf/$ at somewhere around the same time period. Let's also not forget that AMD's RDNA3 price points were too high to begin with, so even their market presence hasn't had any impact on pricing. They happily priced up alongside Nvidia. It wasn't until the 7900GRE and 7800XT that things got somewhat sensible, and competitive versus the EOL RDNA2 offerings, which were also priced high in tandem with Ampere and lowered very late in the cycle.

The real fact is that no matter what AMD has done in the past, their PC discrete share is dropping. They're just not consistent enough, and this echoes in consumer sentiment. It's also clear they've adopted a different strategy and have been betting on different horses for quite a while now.

There is nothing new here with RDNA3 or RDNA4 in terms of market movement. Granted - RDNA3 didn't turn out as expected, but what if it did score higher on raster? Would that change the world?
#47
Assimilator
Vayra86Is AMD staring down the barrel? Is this really worse here than the years they were getting by on very low cashflow/margin products, pre-Ryzen? Are we really thinking they will destroy the one division that makes them a unique, synergistic player in the market?
Yes, it has literally never been worse for their GPU division than today. Until Q3 2022 AMD had rarely dropped below 20% market share, and when they did, they pulled back above that level within at most 2 quarters... since then they have had 7 consecutive quarters below that threshold. That's nearly 2 years of failure not just to gain, but to hold, market share. That's staring down the barrel.

Durvelle27Your post definitely smells of fanboying :wtf:

Which is so laughable considering AMD has no problem competing with Nvidias offerings outside of the RTX 4090

The RX 7900XTX Trades blows with the RTX 4080 Super mostly edging it out
The RX 7900XT beats the RTX 4070Ti Super
The RX 7900GRE Beats the RTX 4070 Super
The RX 7800XT Beats the RTX 4070
etc....

All while offering much better prices




Thanks for demonstrating exactly the same failure of understanding that I documented for AMD's marketing department in my post.
#48
Vayra86
AssimilatorYes, it has literally never been worse for their GPU division than today. Until Q3 2022 AMD had rarely dropped below 20% marketshare and when they did they pulled back above that level within maximum 2 quarters... since then they have had 7 consecutive quarters below that threshold. That's nearly 2 years of failure to not just gain, but hold marketshare. That's staring down the barrel.





Thanks for demonstrating exactly the same failure of understanding that I documented for AMD's marketing department in my post.
If we're looking at trends (I don't deny their share is lowest of all time, mind)...

2015: 18%
2019: 18.8%
2020H2: 18%
2022: 10%
2023Q4: 19%

They've been 'rock bottom' many times before. And if you draw a line over this graph, isn't this just the continuation of the trend of the last decade?

TheDeeGeeSucks to be you, but Path Tracing is the future of videogame lighting, even AMD will have to optimize for it.
Oh? I must have missed that statement after Cyberpunk ran at sub 30 FPS on a 4090.

I think it mostly sucks for people who expect Path Tracing to be the norm. They're gonna be waiting and getting disappointed for a loooong time. Game graphics haven't stopped moving forward despite Path Tracing. Gonna be fun :)
#49
Prima.Vera
RavenmasterWhere's the 384-bit model with 24GB GDDR7 though? Seems like a big gap between the top model and the next one down
That's going to be next year's Super Titanium Ultra Max Plus Extreme GPU releases.
Please stand by.
#50
Tigerfox
Vayra862015: 18%
2019: 18.8%
2020H2: 18%
2022: 10%
2023Q4: 19%
As @Assimilator said, those were always only one or two quarters in a row (Q2-Q3 2015; Q4 2018, not 2019; Q4 2020), whereas since Q3 2022 it has been seven quarters in a row.

What is it with that random line that isn't even a real line? Did you just fail at drawing a straight line from the first to the last shown quarter, or did you connect random quarters on purpose?