- Joined
- Jan 2, 2024
- Messages
- 754 (1.82/day)
- Location
- Seattle
System Name | DevKit |
---|---|
Processor | AMD Ryzen 5 3600 ↗4.0GHz |
Motherboard | Asus TUF Gaming X570-Plus WiFi |
Cooling | Koolance CPU-300-H06, Koolance GPU-180-L06, SC800 Pump |
Memory | 4x16GB Ballistix 3200MT/s ↗3800 |
Video Card(s) | PowerColor RX 580 Red Devil 8GB ↗1380MHz ↘1105mV, PowerColor RX 7900 XT Hellhound 20GB |
Storage | 240GB Corsair MP510, 120GB KingDian S280 |
Display(s) | Nixeus VUE-24 (1080p144) |
Case | Koolance PC2-601BLW + Koolance EHX1020CUV Radiator Kit |
Audio Device(s) | Oculus CV-1 |
Power Supply | Antec Earthwatts EA-750 Semi-Modular |
Mouse | Easterntimes Tech X-08, Zelotes C-12 |
Keyboard | Logitech 106-key, Romoral 15-Key Macro, Royal Kludge RK84 |
VR HMD | Oculus CV-1 |
Software | Windows 10 Pro Workstation, VMware Workstation 16 Pro, MS SQL Server 2016, Fan Control v120, Blender |
Benchmark Scores | Cinebench R15: 1590cb, Cinebench R20: 3530cb (7.83x451cb), CPU-Z 17.01.64: 481.2/3896.8, VRMark: 8009
I'm not sure how to preface this other than to say that my experience across bargain and enthusiast/flagship cards tells me nothing is safe anymore.
Since the Super 7 era I've been an ATI guy, and the Rage XL/Mach64 was kind of the default on everything. You'll still find these chips on servers with a VGA out.
Great for movies and content, but gaming suffered. Fair 2D performance on early Direct3D hardware. I solved my low-FPS issues by switching to a Rage Magnum/Xpert 2000.
No idea if it was the fill rate, the bandwidth, the insane jump from 8MB SDR to 32MB DDR, or whatever, but it worked. That experience solidified what I look for in upgrades.
Feature level lockout had never been an issue or a concern until I moved to a Radeon 9200 to get a Pentium 4 build going quickly. I eventually swapped in an AH3450 to deal with it.
Things were fine until my main system of the time, which featured a similar HD3300 IGP, lost stability. I changed boards and moved up to an HD 6570.
This was around the era when the nuisance of Bitcoin mining started to boom. This is also where I finally made the jump to PCI-E cards. Note the features.
The feature level is DX11_0 and that was the selling point. Performance was reasonable, but games would get bogged down hard whenever there was a bunch of stuff on screen.
I'm talking massive sprite counts of recruits in Kingdoms, or a bunch of enemies/effects in Vampire Survivors, HoloCure, and anything high in particles or polys.
This was also the first point where I started experiencing massive audio-driven STUTTERS in desktop-mode VR applications once they start chugging and swapping video memory.
Effects like cameras and mirrors would also cause hard locks when I looked at them too quickly. Just all-around troublesome at the turn of the current era.
So I picked up an RX 580 Red Devil. It took a few months to get one during all the ETH mining noise.
All the missing features I cared about were unlocked. All the render issues, low-FPS audio stutters, and everything else vanished. I finally ran games at 1080p144.
I was able to encode with this card too. Rendering jobs finally worked right. Kind of unreasonably well. Vulkan stuff is still lethal but UnityVR handles great.
This is the pressing point for me, because gaming performance is falling behind again and content creation suffers badly. I can usually play at 1080p60 but only encode at 720p60 or 1080p30.
Neither option looks good, and at some point I need to guarantee 1080p60. That is never going to happen with this card or any nearby generation in AVC or HEVC.
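For context on how I judge that, this is roughly the throughput check I run. It's only a sketch and assumes ffmpeg is installed with AMF support and that there's a 1080p60 clip called input.mkv on hand; both the file name and the bitrate are placeholders, not anything tied to a specific card.

```python
# Rough 1080p60 AVC throughput check on the AMD hardware encoder.
# Assumes ffmpeg (built with AMF support) is on PATH and input.mkv is a 1080p60 clip.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mkv",    # placeholder 1080p60 source
    "-c:v", "h264_amf",   # AMD hardware AVC encoder
    "-b:v", "8M",         # placeholder bitrate
    "-c:a", "copy",
    "test_1080p60.mp4",
]
result = subprocess.run(cmd, capture_output=True, text=True)

# ffmpeg reports progress (fps=..., speed=...x) on stderr; if it can't hold
# roughly 60 fps / 1.0x on a 1080p60 source, live 1080p60 isn't happening.
speed_lines = [l for l in result.stderr.splitlines() if "speed=" in l]
if speed_lines:
    print(speed_lines[-1])  # final fps/speed readout for the whole encode
```

If that speed readout sits under 1.0x, that's my cue the encoder can't keep up in real time.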
Which brings me to the 7900XT:
On the surface it doesn't look like much. The feature level is DX12_2, which means it runs anything. OpenCL gets an update and there's a dedicated AV1 encoder, but that's about it.
AI toolkits work, there's a new generation of GDDR memory, and it has more than double the VRAM of the card I keep falling back to because I can't get my hands on one that works.
There's a 5-year gap between these cards. I've been trying to get a Hellhound since October, and since January it's been back and forth. Eight months in and I can't make it happen.
Yesterday a very stupid opportunity appeared thanks to what seems to be tons of cards getting dumped right back onto the market: highly binned 5700 XT and 6900 XT cards.
I'm not familiar with how either one performs. I'm under the impression that each model is well above my 580 in gaming performance, but not much else.
Either way I'd lose AV1 encode, which is the selling point of the 7000 series, along with its minor improvements to AVC/HEVC. The remaining odds and ends are probably AI-related and I don't care about them.
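Side note in case it helps anyone comparing these: the quickest check I know for which AMD encoders a setup exposes is to ask ffmpeg, with the caveat that the listing only shows what the build was compiled with, so a short test encode is still what confirms the card and driver can actually do AV1. A minimal sketch, assuming ffmpeg is on PATH:

```python
# List the AMF (AMD) encoders this ffmpeg build knows about.
# Assumes ffmpeg is on PATH; the list reflects the build, not the GPU,
# so a short test encode is still needed to confirm hardware support.
import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True,
).stdout

amf = [line.strip() for line in out.splitlines() if "_amf" in line]
# Typically h264_amf and hevc_amf everywhere; av1_amf should only actually
# encode on 7000-series (RDNA3) hardware.
print("\n".join(amf))
```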
What's the reasonable solution here? There's a tech principle where you stop work or delay a project because newer technologies will have a greater impact.
AMD is in the middle of that with RDNA4 and pouring massive time and resources into RDNA5. Not sure if I expect better options from either one.
I'm already finding situations where decode support for older codecs gets cut in favor of H.264, and that's fine, but everything I've witnessed about GPU improvement is centered around antiques.
There's some possibility the next gen gives us massive high-end AVIF/AV2 support that isn't immediately useful, and then minimal or no improvement where it's currently needed.
If you wonder why I favor the 7900XT for 1080p over others, I have jobs that call for that level of performance plus I don't care to return to this issue for another 8 years.
Plus, with platform promises floating around about better encoder support, it makes sense that NOT missing that milestone matters.
It seems to be a very difficult spot to be a creator with this kind of hardware or these requirements. That's what I take from my own situation, and it doesn't look any better for nVidia.
Two hours ago I watched one of my favorite streamers have a meltdown for about an hour because a horribly unoptimized game tanks the frames and locks her computer.
All while she struggled to get something else to work after the fact. Others with a 4070 Ti and higher-end cards noted similar issues when she looked it up.
Like, there just doesn't seem to be any reasonable expectation of getting anything done. If any of this makes sense to you guys, thanks for taking the time to read through it all.
Is there anything that fits? Like did I miss something important? Am I in the wrong part of the GPU market?