Tuesday, July 28th 2020
AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal 1080p by Itself
Hot on the heels of a June story of an 11th Gen Core "Tiger Lake" processor's Gen12 Xe iGPU playing "Battlefield V" by itself (without a graphics card), Tech Epiphany brings us an equally delicious video of an AMD Ryzen 7 4700G desktop processor's Radeon Vega 8 iGPU running "Doom Eternal" by itself. id Software's latest entry in the iconic franchise is well optimized for the PC platform to begin with, but it's impressive to see the Vega 8 munch through this game at 1080p (1920 x 1080 pixels) with no resolution scaling and mostly "High" details. The game is shown running at frame-rates ranging between 42 and 47 FPS, and staying above 37 FPS in close-quarters combat (where the enemy models are rendered with more detail).
With a 70% resolution scale, frame rates are shown climbing past 50 FPS. At this point, when the detail preset is lowered to "Medium," the game inches close to the magic 60 FPS figure, swinging between 55 and 65 FPS. The game is also shown utilizing all 16 logical processors of this 8-core/16-thread processor. Despite having just 8 "Vega" compute units, amounting to 512 stream processors, the iGPU in the 4700G is free to dial its engine (GPU) clock all the way up to 2.10 GHz, which helps it overcome much of the performance deficit against the Vega 11 solution found in the previous-generation "Picasso" silicon. Watch the Tech Epiphany video presentation in the source link below.
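As a rough back-of-the-envelope check of why the higher clock can offset the missing compute units, the sketch below compares peak FP32 throughput of the two iGPUs (CUs x 64 stream processors x 2 FLOPs per clock x boost clock). The 2.10 GHz figure for "Renoir" is from the story above; the 1.40 GHz boost for the "Picasso" Vega 11 (as in the Ryzen 5 3400G) is an assumption based on its commonly listed specification.

// Back-of-the-envelope peak FP32 throughput for the two iGPUs.
// Assumes 64 stream processors per Vega CU and 2 FLOPs (one FMA) per SP per clock.
#include <cstdio>

double peak_tflops(int compute_units, double clock_ghz) {
    const int sp_per_cu    = 64;  // stream processors per Vega compute unit
    const int flops_per_sp = 2;   // fused multiply-add counts as two FLOPs
    return compute_units * sp_per_cu * flops_per_sp * clock_ghz / 1000.0;
}

int main() {
    // Renoir (Ryzen 7 4700G): 8 CUs at up to 2.10 GHz -> ~2.15 TFLOPS
    std::printf("Vega 8  @ 2.10 GHz: %.2f TFLOPS\n", peak_tflops(8, 2.10));
    // Picasso (e.g. Ryzen 5 3400G): 11 CUs at ~1.40 GHz (assumed) -> ~1.97 TFLOPS
    std::printf("Vega 11 @ 1.40 GHz: %.2f TFLOPS\n", peak_tflops(11, 1.40));
    return 0;
}

Under those assumptions the 8-CU "Renoir" iGPU actually has slightly higher peak compute than the 11-CU "Picasso" part, which lines up with the frame rates observed in the video.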
Source:
Tech Epiphany (YouTube)
66 Comments on AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal 1080p by Itself
Did the CPU finish the game? Dream on... With the new pipeline for RT in GPUs, I don't think any sort of mGPU solution is going to really make waves anytime soon. If they do, I think it's more likely they will dedicate specific render tasks to one of the GPUs instead of the classic split-frame / alternate-frame approach. We already have some of those tricks.
Which is still way slower than the 2014 midrange GTX 970 running everything on High:
www.purepc.pl/test-wydajnosci-doom-eternal-pc-piekielnie-dobra-optymalizacja?page=0,4
I'd suggest anyone spend the 60 dollars on a graphics card instead of the game.
AMD did very well on the CPU side overall, but has ignored the GPU high end lately, and that's not good for the end customer at all.
yawn
Hope Intel Xe leapfrogs them this year or next.
The cheapest 1050 Ti is 165 euro in France.
Perfect for my wife's PC; she plays small games.
The secret to it is that the system and the API (DirectX) see the MCM GPU as a single logical entity and, if I remember correctly, it accomplished the same feat with multiple memory pools. I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to target multiple GPUs specifically. If this is the case, then AMD may be able to bust the GPU market wide open.
That said, I would imagine that the same technique could possibly be applied to multi-GPU / multi-video-card setups somehow.
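For what it's worth, DirectX 12 already has a notion of a single logical device spanning multiple physical GPU "nodes" (the linked-node adapter model), where each node is addressed through a node mask and can own its own memory heaps. The sketch below is only a minimal illustration of that existing API surface on Windows/D3D12, not of whatever AMD's rumored MCM design actually does.

// Minimal sketch: query a D3D12 device for multiple GPU nodes and create one
// command queue per node. On a linked-node adapter the whole package still
// appears as a single ID3D12Device to the application.
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    UINT nodes = device->GetNodeCount();  // > 1 on a linked multi-GPU adapter
    std::printf("Logical device exposes %u GPU node(s)\n", nodes);

    std::vector<ComPtr<ID3D12CommandQueue>> queues(nodes);
    for (UINT i = 0; i < nodes; ++i) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << i;  // work submitted to this queue runs on node i
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queues[i]));
    }
    // Resources are likewise placed per node via CreationNodeMask /
    // VisibleNodeMask in D3D12_HEAP_PROPERTIES, i.e. separate memory pools.
    return 0;
}

The open question is whether a chiplet GPU could hide all of this behind a single node, so applications wouldn't have to manage the split at all, which is what the rumor seems to imply.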
Furthermore, AMD isn't as well positioned in x86 as people assume. Yes, they created several salients in the frontline, but they still need to consolidate those advances so as not to be driven back by a resurgent Intel. That's why Intel delaying 7 nm is actually good for all of us: AMD still needs to capture much more market share, specifically in mobile, OEM, and enterprise, so that when Intel does strike back, they're in a much better position, especially on the mindshare front.
Mindshare is where Nvidia beat them: there were several times in the late 2000s when AMD not only had faster video cards, but cheaper ones as well, and Nvidia still outsold them. To rational enthusiasts this doesn't make sense, but unfortunately the vast majority of consumers don't approach buying decisions from a rational angle, tediously comparing empirical data as we do. Whether they want to admit it or not, they base their decisions on feelings (what psychologists have described as social identity theory and in-group/out-group psychology, which also explains the behavior of fanboys).
AMD needs to continue building mindshare in x86, where they're winning, so that the average consumer identifies the brand with superiority. Once that is achieved, AMD will have much greater success in the dGPU market, and products like the 5700 XT, which is a better value than both the 2070S and the 2060S, will translate to better sales than those competing cards as well.
Don't worry, Nvidia's time will come as today's empires are tomorrow's ashes, and I personally believe that RDNA2 will be a harbinger of that time. When the new consoles inevitably impress both reviewers and regular consumers alike, and more of them associate that clear intergenerational increase in performance with AMD and their hardware, it will translate and reverberate in other related markets. Let's have a bit of faith in Lisa Su, as she certainly deserves it. RDNA2 could very well be a Zen2 moment, and if what I've been reading about leaks is true, Nvidia is concerned.
The news says it's still Vega 8, you can spin it however you want.
We also saw SLI and Crossfire die off.
We also saw LucidVirtu and other solutions to combine IGP and dGPU.
We also saw mGPU in DirectX 12 limited to a proof of concept in Ashes of the Singularity, but even that doesn't work anymore.
And yes, there is of course the off chance AMD will 'break open' the GPU space, like they are poised to do every year. Mind if I take my truckload of salt? So far, they are trailing by more than a full generation in both performance and feature set. The gap isn't getting smaller; it's getting bigger.
Also, Cucker, you make me cringe.
I believe that both support OpenCL and DirectCompute, and feature video decoding/encoding blocks that software vendors can utilize.
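As a concrete illustration of that point, a minimal OpenCL enumeration like the sketch below would typically list both the Vega iGPU and any discrete card as separate compute devices that software can target independently (assuming an OpenCL runtime for each is installed).

// Minimal sketch: enumerate OpenCL platforms and GPU devices, which is how
// compute software typically discovers both an iGPU and a dGPU on one system.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("GPU device: %s\n", name);
        }
    }
    return 0;
}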
I'm probably being overly optimistic and it's mostly just wishful thinking, but I could see post-processing and other techniques, and drastically different rendering approaches like rasterization versus ray tracing, being separated into a round-robin mGPU synchronization structure, in a kind of variable-rate-shading, semi-modular, flexible manner that could even somewhat resemble what Lucid Hydra attempted to achieve with mGPU. The real question is whether AMD, Nvidia, or even Intel has a compelling enough reason to turn that kind of thing into a reality.
Another interesting thing to think about is augmented reality and how that could impact GPUs of the future. Will we see a GPU dedicated to it, or perhaps 3D augmented glasses that work in tandem with a display whose integrated GPU does some kind of post-process magic on the display end, similar to the mClassic but far more advanced? On-the-fly post-processing is certainly not beyond the realm of possibility; it could be done by an external GPU or even integrated into the display itself. A good example of cool real-time post-process hardware is something like the Roland V-Link for VJ work. We've also got multi-viewers that can split four signals into quadrants; maybe we'll see GPUs that can split the entire scene into FreeSync quadrants rendered by different GPUs for each section. Possibly you could even see checkerboard rendering for the quadrants, or interlacing, or a quick interlaced scaling as a fallback routine.
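To make the quadrant idea concrete, here is a toy sketch (not any real vendor API) that splits a frame into four viewport rectangles and assigns each one to a GPU index round-robin; a checkerboard or interlaced scheme would just change how the rectangles (or scan lines) are generated.

// Toy illustration of split-frame rendering: divide a 1920x1080 frame into
// four quadrants and hand each quadrant to a different GPU index.
#include <cstdio>

struct Rect { int x, y, w, h; };

int main() {
    const int width = 1920, height = 1080, gpus = 4;
    for (int i = 0; i < 4; ++i) {
        Rect q = { (i % 2) * (width / 2), (i / 2) * (height / 2),
                   width / 2, height / 2 };
        int gpu = i % gpus;  // round-robin assignment of quadrants to GPUs
        std::printf("Quadrant %d -> GPU %d: origin (%d,%d) size %dx%d\n",
                    i, gpu, q.x, q.y, q.w, q.h);
    }
    // A real implementation would still have to composite the four results and
    // keep per-quadrant workloads balanced, which is where SLI/CFX-style
    // split-frame rendering historically ran into trouble.
    return 0;
}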
If you think about all the tech that is available, it seems like these companies could come up with something that actually works reasonably well and without all the terrible pitfalls of SLI/CF, Lucid Hydra, or even something like SoftTH. Surely somebody can come up with something that works more or less flawlessly. I'm surprised we haven't seen a checkerboard technique at the interlaced scan-line level yet, combined with something like FreeSync on the display itself along with on-the-fly post-process scaling and basic image enhancement. I definitely feel as if mGPU just needs a good breakthrough to really shine and truly be a sensible, appealing solution.
The way I see it, if big companies like AMD/Nvidia/Intel can't come up with good approaches, let's damn well hope someone like Marseille or Darebee can up the ante and raise the bar. Something interesting I thought of that could be a neat experiment is taking a projector, maybe a pico projector, doing display cloning on a GPU, and then aligning and matching the projected pixels onto a display. If you can adjust the contrast/brightness and similar settings on the projector, you might essentially get a sort of free-layer anti-aliasing effect from the blending of the two images. I don't know how effective it would be in practice, but it's fun to hypothesize about. You could think of it as sort of a deep-fake kind of effect.