I haven't seen anyone talking about the RX 7900 XT.
Actually, out of the RX 6000 and RX 7000 line-ups, it's the only 7000-series variant worth buying today, along with (maybe) the RX 6650 XT, RX 6700 XT and RX 6800 (XT).
Of course, it could have been better as a monolithic GPU: more performance than the chiplet approach delivers, plus the missing ~10% needed to reach the original performance target for Navi 31.
I wonder why AMD doesn't release an improved or fixed card.
Still too damned expensive....regardless of mfgr, model, version etc....
What we need ATM are well-rounded, well-spec'd cards that can do 99% of what we need them to, AND are affordable for the average everyday user, including gamers, CAD folk, the Blender crowd etc....
Over here, only one 4070 Ti has dropped down to 759€ (from an eBay dealer). The rest all start from 800€ and up. The 7900 XT, on the other hand, starts at 759€ (from credible sellers).
It's not like you can't run Blender on an AMD card, it just takes a little bit longer. Which only matters if you have to render regularly; if you render a video from time to time, it shouldn't be a big deal. Blender is also a bit of a cherry-picked example, since AMD does pretty well in the other productivity tasks.
Also you can't blame AMD entirely for the bad Blender performance, as the following article shows:
Blender expands AMD GPU support with HIP-RT integration. What is HIP-RT, and how much does it improve rendering times?
www.pugetsystems.com
Quote: "For years, AMD users have eagerly waited for Blender to tap into their hardware’s ray-tracing capabilities fully. Blender officially added AMD HIP support with the 3.0 release in December 2021. However, this did not take advantage of the dedicated ray tracing cores available in the Radeon 6000 and 7000 series GPUs. The wait is finally over, as the latest Blender 3.6 update officially enables support for AMD ray tracing cores. This enhancement promises to significantly accelerate the rendering process, showcasing the potent synergy between AMD hardware and Blender’s advanced rendering algorithms. We’ll delve into the impact of this update and how it promises to improve rendering workflows.
Blender’s decision to enable AMD ray tracing cores marks a pivotal moment in the world of 3D rendering. This follows Maxon’s recent inclusion of HIP in their Redshift renderer. We are increasingly seeing AMD looking to professional workflows with their video cards. They still aren’t entirely competitive with NVIDIA, but this comes as a warning shot. AMD is taking GPU rendering seriously, and if they are able to make the same sort of improvements as they did with CPUs when they introduced the Ryzen line, 3D artists stand to win. We are excited to see what the future holds for GPU rendering."
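On a practical note: if you want to confirm Cycles is actually picking up your Radeon (and the new HIP-RT path), you can check from Blender's Python console. Here's a minimal sketch; the HIP-RT property name (`use_hiprt`) is my guess based on the 3.6 UI label, so treat it as an assumption rather than gospel.

```python
# Run from Blender's Python console, or: blender --background --python enable_hip.py
# Sketch only -- the HIP-RT property name is an assumption and may differ by version.
import bpy

cprefs = bpy.context.preferences.addons["cycles"].preferences

# Select HIP as the Cycles compute backend (available since Blender 3.0 on RDNA cards).
cprefs.compute_device_type = "HIP"

# Refresh the device list and enable every HIP device found.
cprefs.get_devices()
for dev in cprefs.devices:
    if dev.type == "HIP":
        dev.use = True
        print("Enabled for Cycles:", dev.name)

# Blender 3.6+: opt into the hardware ray-tracing path (HIP-RT), if the build exposes it.
if hasattr(cprefs, "use_hiprt"):
    cprefs.use_hiprt = True

# Render the scene on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
```

If the HIP devices show up and the scene renders on the GPU, any remaining gap to OptiX is down to the renderer itself, not a misconfiguration.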
Are you trying your hardest to avoid the point I'm making or are you reading past it?
There is a 4060ti with 16GB.
There are midrange Ampere cards with 16GB.
These cards have no business in gaming whatsoever. There's nil advantage to them over their half VRAM counterparts (perhaps situationally, but that won't last), especially the 4060ti.
That's them catering to that exact demand right there, but on a much more 'democratic' price level. Still overpriced for what it really is. But. An Nvidia card with 16GB on the newest architecture. It can do RT. It has AI. It has creator tools. It has everything your little gamur heart wants. Yadayada. You get the gist?
I'm not telling you this makes sense in any kind of realistic economy or for real pro markets. But it makes sense in the hearts and minds of prospective buyers. Young people with little knowledge of what they might do or can do with that GPU perhaps. Somewhat more knowledgeable people that know how to put VRAM to use. Etc. There's a market here. Niche? I'm not so sure. I think a lot of people are sensitive to this.
I can't even claim I was totally insensitive to this myself; take a feature like Ansel, for example. It's not like I would have picked Pascal over any other GPU at the time for it. But still. It's yet another neat little tool you can use, and I've pulled some pretty nifty screens for my desktop from it. All these little things really do matter. If you buy an Nvidia GPU, you get a package of added value that AMD simply cannot match. I'm now past the point of caring too much about all of that, but it's a package nonetheless.
I'll be honest, all of the choices we've really had this gen are still less than ideal. Palatable, at this point, is the furthest I would go, and that only counts for a small selection of GPUs on either side of the line. And you're right... almost normal price. Almost. But it's also nearly Q2 2024.
True that, but it's getting better by the month. We're still feeling the aftermath of mining and scalping; this will take years. But consumers are also stupid as shit: accepting a gaming card for 1,600€ and calling it a good deal is insanity when it objectively has the worst value.
The funny thing is the 4090 is so cut down it would barely qualify as an actual 80 Ti card.
But hey, people bought the freaking 3090 for double the price of a 3080 while it was only about 10% faster. Stupidity has no bounds, especially among gamers, as you can see across the gaming industry.
Nvidia lied when they said the 90-class cards would replace the Titans. The Titan RTX burns the 3090 in some productivity tasks because it was a real Titan, not a wannabe that exists just so they could double the prices. And people ate it up.
Sweet. Seems AMD is too slow and lags behind in the game Nvidia is running. How long will it take until Nvidia lands another punch? Note that Nvidia didn't even have to lower prices: they could simply have kept gouging the market, or even raised the MSRPs. AMD is being beaten with its own methods.
However, the availability of the 4070 Ti and other Nvidia cards is another question.
That’s a pretty big “unless”. I love how a lot of people just downplay that Radeon are absolute cheeks in anything apart from gaming. If you do, for example, any amount of work in Blender you basically have no actual choice. NV is the only game in town. The fact that AMD still hasn’t even tried to compete with OptiX is pathetic, to be blunt. How they think that they can just keep their back turned while the competition reaps the rewards is baffling to me.
You're right here. However, some time ago the same criticisms were aimed at the GCN and Vega architectures, which, even while being low-end for gaming, were compute monsters. That's in contrast to their Nvidia counterparts, which, with the exception of the very high end, were completely anemic at those tasks. Now the tables have turned, but the narrative has stayed the same.
You don't understand how this works: neither Nvidia nor AMD cares about regular consumers when it comes to professional workflows; it's not relevant for that segment. Nvidia cards perform better because of the Quadros, since they share the architecture, but Nvidia wants people to buy Quadro, not GeForce. That's why they have more than once gone out of their way to cripple productivity performance on those cards and limit the VRAM. Those market segments are distinct; regular consumers do not care about professional workflows. You are just wrong.
Not to mention Nvidia is using CUDA to gatekeep this market; AMD could make a GPU a billion times faster and it still wouldn't matter. They'd be absolute idiots to focus on this one thing that simply doesn't matter, because it's not going to get them more market share.
I'm not trying to attack you; just some points to note.
Everyone knows that Nvidia gatekeeps the market with CUDA and does really dirty things to its fans, its consumers, heck, even its precious clients and partners. They shit on absolutely everyone. That's a fact. Nvidia is an anti-consumer, pro-investor corporation worth a trillion bucks, and it grows like mushrooms after the rain.
But at the same time, what prevents AMD from turning the situation around and providing a "morally correct", open-source alternative to CUDA, along with countless other comfortable toolsets? From stopping this vicious circle and disrupting the monopoly? But AMD doesn't fight that; instead it joins the game, and might as well be in collusion with Nvidia. Anybody can point out the disgusting tactics Nvidia uses and how locked-down its proprietary ecosystem is. But credit is due for their many endeavours and the many SDKs they open up to developers, direct "incentives" aside. There's no need to bribe game developers, as most already build games around the consoles, which carry Zen 2 and RDNA 2. What's needed is to help and support developers and make the process as easy as possible, so devs won't even care about Nvidia's fat suitcases.
Again, why can't AMD invest in its own viable, effective, comfortable, quality ecosystem, proprietary or not? What prevents AMD from doing so, except greed? At this point, AMD looks like the laziest company: they sit on the laurels of EPYC/Ryzen, milk them as much as possible, and only occasionally respond to rivals. And they wave the open-source banner over their stuff mostly to offload development onto the shoulders of clients and the community.
The first couple of months of every release since Zen 2 have been about milking trustful consumers at almost double MSRP. Only when Intel or Nvidia try to undercut them do they bring prices down to a more sane level (still not sane). And that's despite AMD's chiplet approach being miles cheaper than Intel's and Nvidia's big monolithic dies. The products still cost the same or more regardless. Even accounting for Nvidia's premium tax, this looks like a scam. For years AMD poured into everyone's ears that the chiplet strategy would be both energy-efficient and cost-effective, and would bring their product prices down by a lot.
It also took AMD almost two decades to roll out ROCm, and only to accompany their MI200/MI300. That shows AMD invested in it only to compete in Nvidia's AI race and grab a piece of that profitable pie. And it still isn't a complete alternative to CUDA.
And make no mistake, the consumer cards that Nvidia sells in far greater numbers to their "dumb" fans are still supported with both gimmicks and genuinely strong features. The gimmicks are impotent real-time ray tracing that doesn't work without "deblurrers" and fake frames, plus PhysX, Hairworks, G-Sync, etc. The strong point is their encoding. Like it was mentioned, everyone can become a streamer and YouTuber, and even mid-range cards from both Nvidia and Intel provide vastly better encoding than even top AMD cards can reach. AMD simply has no alternative. CUDA is just a small bonus from the perspective of the regular consumer.
Now again, what was the reason AMD made no moves toward raising the market share and market penetration of their own products? Why doesn't AMD fight for their share the way Nvidia does with their somewhat overestimated and, to some extent, BS products? AMD is not the underdog they were years ago. They have very big profit margins and tons of cash to fund any software and hardware development. No one can say AMD is a poor company. At the same time they behave as if they were the market leader, or held a monopoly on every front, and no longer had to do anything.
Look, Nvidia once had less than 50% of the market share while having inferior products. They invested in marketing and R&D, even while using anti-consumer tactics. Why does AMD just sit and wait for the market to come to them, without even trying?
Yes, APUs are great. This is the way the absolute majority of desktops should be, and that power-efficient too. The exceptions are demanding CAD/rendering and scientific use cases. However, as you've said yourself, those are not tasks for ordinary users, who don't need an ultra-high-end dGPU to run games. The majority of gamers already use mid-range or even lower-end GPUs. What's needed is a more powerful iGPU capable of High settings at 1440p. And at this pace, that's not really far from reality.
But that merit isn't due to AMD's sheer generosity. It's the result of AMD sitting on a stock of unsold, "bad"-binned mobile chips that aren't fit for mobile use.
The same goes for Ryzen. As amazing as it is, it's literally the bottom of the binning: the absolute leftovers that weren't suitable first for EPYC, and then for Threadripper. And even then, AMD managed to go further and cut down many features for desktop users that came with the Zen chips for free, since they were already there (people mocked Intel for years for the same stinky tricks).
Even worse, they pulled a "worst of Intel" and started artificially limiting and fragmenting chipset and motherboard capabilities. They let partners run rogue with their BIOS settings and thus damage Ryzen's image. They had RAM problems during the AM5 launch, and recently the STAPM failure. The QC is nonexistent. Their motherboards cost more than Intel ones while missing many absolutely necessary, basic features, again because partners are kept on a loose leash. All of this within a single socket/platform launch. This is a disaster. Intel has been burned for this crap for decades. Now the tables have turned, but it seems AMD isn't drawing any conclusions.
Why this matters is that such incompetent behavior is dangerous not only for AMD itself, but for the entire market. Lose one participant to its own reckless moves and the market collapses. The next RTX xx50 would cost a grand, if it exists at all. Every consumer and buyer needs competition, and that's impossible when one of the participants has already given up.
And all it took for price competition to restart was for NVIDIA to release products that compete with its own products, while AMD sits in the corner and sucks its thumb.
Uh yeah, do you know why companies optimise their software for NVIDIA and not AMD? Because NVIDIA pays them to and AMD does not. Because NVIDIA understands that the ROI on that tiny expense is going to be many times greater. This is business 101, yet AMD perpetually fails to understand this.
Indeed. This is almost like the Bulldozer vs Sandy Bridge drama all over again, when Intel was competing only with itself for almost eight years. AMD needs to roll out its "Zen moment" for GPUs, or it will lose the consumer gaming market completely. Intel has already reached a market share that took AMD a decade to build, with just a couple of years on the market, and even after their botched Xe launch. What is AMD going to do when Battlemage arrives? I bet Intel isn't sitting idle on their arse.
The problem is that even the very limited data from the Steam Hardware Survey shows the 4060 still has about the same share as the HD 5450. What that means is that the GPU pricing and positioning is utter sh*t. If it were reasonable, its share would surpass the 1060 6GB's in no time, even though it isn't really a worthy upgrade over the 3060 Ti.
That's what they did with PhysX, where they were offering software support to non-Nvidia systems that was hilariously slow, to force people to buy Nvidia GPUs. Hardware PhysX died in the end, but CUDA is a different beast.
Nvidia knows how to create the illusion of being open while driving consumers to its proprietary options.
AFAIK, PhysX still relied on CUDA. And it still used the CPU heavily, much like encoding/decoding does due to VRAM compression operations. It was still fundamentally a CPU technology, artificially locked behind a proprietary GPU.
Another question: is it possible to trick games into using Radeons for it? There's no way they can't run such a basic task.
I have a question about the Steam Hardware Survey. In the GPU section, it shows that the 3060 laptop GPU increased in January. The only issue with that is that the 3060 laptop GPU hasn't been available since 2022. How could that be?
That might be those poor laptops that sweatshops and cafés were using to mine Ethereum and other crypto garbage. What nobody tells you is where all the storage that was used for Chia mining has gone.
Chia mining didn't use lots of storage, it used up lots of storage. The expected lifespan of a TLC SSD was about 90 days of Chia plotting per TB of capacity. I assume QLC drives didn't even last long enough to be worth bothering with. For a mechanical drive of any capacity to survive more than 6 months was also an outlier, apparently, with death usually coming at the 3-5 month mark.
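To put that wear figure in perspective, here's a back-of-the-envelope sketch in Python. The inputs (600 TBW endurance for a typical 1 TB consumer TLC drive, roughly 1.5 TB of temporary writes per k=32 plot, 8 plots a day) are assumed round numbers, not measurements, so adjust them for your own setup.

```python
# Rough Chia-plotting wear estimate. All inputs are assumptions / typical round
# numbers rather than measured values.
tbw_rating_tb = 600        # endurance rating of a typical 1 TB consumer TLC SSD (TB written)
writes_per_plot_tb = 1.5   # temporary data written per k=32 plot, roughly
plots_per_day = 8          # a modest plotting rate on consumer hardware

plots_until_worn = tbw_rating_tb / writes_per_plot_tb   # ~400 plots
days_until_worn = plots_until_worn / plots_per_day      # ~50 days

print(f"Plots before rated endurance is exhausted: {plots_until_worn:.0f}")
print(f"Days of continuous plotting at {plots_per_day}/day: {days_until_worn:.0f}")
```

Depending on the plotting rate you assume, that lands in the same rough ballpark as the 90-days-per-TB figure.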
Even as someone who mined and holds crypto, I couldn't see the point of Chia, and I'm not really sure I see the point of Bitcoin mining. Digital, nation-independent DeFi is the future, and Bitcoin started that, but we don't need to mine it wastefully. A successful independent DeFi doesn't have to generate 90 Mt of CO2 a year for no justifiable reason.
The narrow bus is exactly what the 4060Ti needs. My own personal 4060Ti is undervolted and underclocked to a 125W power draw but even hamstrung like that and rendering at just 1080p I'll run into situations where the overlay says it's not fully loaded and neither is any single CPU core. That's either memory bandwidth or game engine bottlenecking, and I know it's not game engine because the same scene runs at 100% GPU usage on the 4070 or 7800XT.
It's also ROP-limited, so resolution scaling on the 4060 Ti is pathetic compared to the 3060 Ti, but since the bandwidth bottleneck is so severe, we don't really get to see the ROP limitation. For the 4060 Ti to be a better 1440p card, it would mostly have needed more bandwidth, but that would just have revealed the ROP deficiency, which is more situational but still an issue holding it back.
Sadly, if you head to Wikipedia and look at the one-spec-sheet-to-rule-them-all, you can see how the 4060 Ti is really a successor to the 3060 8GB in terms of bandwidth, SM+GPC counts, and the relative position of that silicon in Nvidia's range of GPU dies. It's a long way off the 3060 Ti, and the only reason it gets close is because TSMC 4N lets Nvidia clock it 55% higher than the 3060 Ti on Samsung's underwhelming 8nm node.
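If anyone wants to sanity-check that kind of bottleneck on an Nvidia card themselves, here's a minimal sketch using the pynvml bindings (`pip install nvidia-ml-py`). It just samples the same GPU-core and memory-controller utilization the overlay shows; reading a pegged memory-controller number as a bandwidth wall is my interpretation, not something NVML states directly.

```python
# Minimal bottleneck sanity check via NVML (pip install nvidia-ml-py).
# Run it while the game or render is going and watch the two numbers.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    for _ in range(30):  # ~30 seconds of 1 Hz samples
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        # util.gpu    -> % of time the graphics/compute engines were busy
        # util.memory -> % of time the memory controller was busy
        print(f"core: {util.gpu:3d}%   memory controller: {util.memory:3d}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

If the core number sits well below 100% while the memory controller is pegged, bandwidth is the likelier limiter, which is at least consistent with what the 128-bit 4060 Ti shows.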
And you've enjoyed 15 months of games at 4K from 2022 to 2023 titles I presume?
TLoU-P1, CP2077, Hogwarts and MS Flight Sim all exceed 12GB at 4K on max settings. You'll notice it uncapped, because it manifests initially as microstuttering and frame-pacing issues, but realistically at those settings (especially Overdrive in CP2077) you're unlikely to be getting much more than 60fps in heavy scenes anyway, so hiding the stuttering/pacing issues behind a 60Hz cap means it's less of a problem in those older 2022/2023 titles. Realistically, the issue with the 4070 Ti isn't its performance over the past 15 months, it's how it's going to perform in the next 15 months, now that so many more games in the development pipeline are moving to UE5 and also ditching any semblance of PS4 and XB1 compatibility now that those consoles have been dropped for good.
The 4070 Ti isn't a bad card. It's objectively better than the 4070 and 4070S, both of which are considered "decent" cards. Everyone talks shit about the 4070Ti because of the asking price, and the sheer hubris/cheek/greed of Nvidia in trying to launch it at $900 as the 4080 12GB.
If you fall into the trap of comparing its price to other 40-series cards, you'll end up drinking the Nvidia kool-aid and justifying the cost relative to the 4080, which was just ridiculously poor value. The reality at launch was that the $800 4070 Ti was bringing 3080 12GB / 3080 Ti levels of performance and VRAM to the table at the exact same performance/$ point as new retail cards at that time. It wasn't dramatically faster than the 6950 XT (another chart in the same HUB Jan '23 update), which was selling for $649, not that it had the same feature set.
Hardware Unboxed has been doing these monthly GPU pricing updates for a few years now, and around the 4070 Ti's launch it's clear to see why there was so much hate for the card and it's all because of the price. The only reason you can say it's a decent card is because you bought it at a deep discount which means you weren't actually price-scalped by Nvidia.
There wasn't. What people were doing was running patches that defeated Nvidia's lock. There was also an Nvidia driver (I think 256-something) where they had "forgotten" to implement the lock; that driver worked fine with Radeons as primary cards without needing any patches. Nvidia's excuse was that they couldn't guarantee CUDA and PhysX stability and performance if the main GPU wasn't an Nvidia one, which was a HUGE pile of BS. So people had to use software PhysX that was totally UNoptimised. Somewhere around 2014, I think, when hardware PhysX was already dead, they removed that lock. In my second system I have a GT 710 as a second card for PhysX and CUDA (well, more accurately "for wasting power and occupying one PCIe slot") and it runs CUDA and PhysX fine without needing any kind of patch to activate them.
Nvidia's lock was so anti-consumer that even someone who had paid full price for an Nvidia card to use it for CUDA or PhysX, while also using a higher-end/newer AMD card as the main GPU, couldn't, because Nvidia forced the Nvidia card to be the primary one. So if I had, for example, a GeForce 9800 GT and later bought an HD 4870 to use as my primary 3D card, Nvidia punished me for not being loyal by disabling CUDA and PhysX. That was a very sh__y business practice from a company that back then had 60% of the market, less than a billion in income per quarter, and less support from the public and press. Imagine how they operate in the background today, with 80%+ of the market, billions in income every quarter, and total acceptance from the public and support from the tech press. And yet people expect AMD to offer super-competitive options without losing money in the process for no results.
NVIDIA's new GeForce RTX 4080 Super introduces a noteworthy $200 price reduction compared to the non-Super 4080, placing significant pricing pressure on AMD's RX 7900 XTX. Despite this, the performance gains vs RTX 4080 non-Super are only marginal, we expected more.
1. VRAM usage and VRAM allocation aren't the same thing. A lot of games allocate more than 12 GB VRAM if available, but run fine on an 8 GB card. Some of them have texture and asset loading issues if there isn't enough VRAM, but the FPS looks fine. There is no blanket statement here, unfortunately.
2. Not everyone plays in 4K. Up to 1440p, 12 GB is still fine, in my opinion. How fine it will be in the near future when the PS5 Pro is out and new games get developed for it, we'll see.
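The allocation-vs-usage distinction from point 1 is easy to see with NVML as well. The sketch below (same pynvml bindings as earlier) lists the board-level and per-process VRAM figures; note these are amounts the driver has handed out, not the working set a game actually touches each frame, and per-process reporting may not be available on every platform (e.g. under WDDM), so treat it as a sketch.

```python
# Show board-level and per-process VRAM allocation (pip install nvidia-ml-py).
# These figures are driver-reported allocations, not the per-frame working set.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"VRAM used / total: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB")

# Per-process allocations; games show up as graphics processes.
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    gib = (proc.usedGpuMemory or 0) / 2**30
    print(f"pid {proc.pid}: {gib:.1f} GiB allocated")

pynvml.nvmlShutdown()
```

That's why a 12-GB-plus reading in an overlay doesn't automatically mean an 8 GB or 12 GB card will choke on the same scene.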
Last of Us and Hogwarts are shitty console ports that make no effort to manage resources. That's not NVIDIA's fault yet you're blaming NVIDIA. No logic.
NVIDIA has never positioned the 4070 series or lower as 4K cards. Therefore, if you run 4K on anything lower than a 4080 (which has perfectly sufficient memory for that resolution because it's designed for it) and complain about the experience, that's on you. It's really simple: if you want to run games at 4K, buy the GPU designed for 4K. Can't believe I have to explain this, but here we are.
When it ran on the GPU, the CPU wasn't doing much. When it ran on the CPU, everything bad happened: the CPU maxed out and the FPS became a slideshow.
I was one of the "fools" who bought an Nvidia card for PhysX, only to have the card not even be detected by Windows, of all things. It was dastardly, because the card would work fine without an AMD card installed.
As someone who played at 1080p with a 2070, then a 6750 XT, then a 7800 XT for a while before I upgraded to UW 1440p, I strongly disagree.
If you only want to play current games and swap out your GPU with every new generation, then go ahead, but I do think that having some reserve potential in your system for future games isn't a bad idea.
Especially when you've earlier said that ^^^^
So, how much would you be willing to spend on that 12 GB card and what are you going to use it for if you did spend?
What's your point? All I'm saying is, if someone (me, for example) was happy with mid-range GPUs at 1080p for a long time, then I don't see why buying a high-end one for anything less than 4K would be a bad idea.
But maybe those of us who bought a 3090 are so smart we can make money almost out of thin air... or lucky enough that we had the money to spend on a 3090.
You can buy a Lambo or a Toyota...
...but even Toyota owners can buy a 3090.
I just want to say, the 3090 is not a smart buy, but many can buy it without selling a kidney.
I wish I could buy better English writing talent... but I can't, so I buy the best GPU money can buy.