
AMD Ryzen 7 4700G "Renoir" iGPU Showing Playing Doom Eternal 1080p by Itself

btarunr

Editor & Senior Moderator
Staff member
Hot on the heels of a June story of an 11th Gen Core "Tiger Lake" processor's Gen12 Xe iGPU playing "Battlefield V" by itself (without a graphics card), Tech Epiphany brings us an equally delicious video of an AMD Ryzen 7 4700G desktop processor's Radeon Vega 8 iGPU running "Doom Eternal" by itself. id Software's latest entry in the iconic franchise is well optimized for the PC platform to begin with, but it's impressive to see the Vega 8 munch through this game at 1080p (1920 x 1080 pixels) with no resolution scaling and mostly "High" details. The game is shown running at frame rates ranging between 42 and 47 FPS, with over 37 FPS in close-quarters combat (where the enemy models are rendered with more detail).

With a 70% resolution scale, frame rates are shown climbing past 50 FPS. At this point, when the detail preset is lowered to "Medium," the game inches close to the magic 60 FPS figure, swinging between 55 and 65 FPS. The game is also shown utilizing all 16 logical processors of this 8-core/16-thread chip. Despite just 8 "Vega" compute units, amounting to 512 stream processors, the iGPU in the 4700G has the freedom to dial up its engine clock (GPU clock) all the way to 2.10 GHz, which helps it overcome much of the performance deficit against the Vega 11 solution found in the previous-generation "Picasso" silicon. Watch the Tech Epiphany video presentation in the source link below.
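Some quick napkin math from those numbers (assuming the resolution scale is applied per axis, and the usual two FP32 operations per stream processor per clock):

\[ 1920 \times 0.70 = 1344, \qquad 1080 \times 0.70 = 756 \]
\[ \frac{1344 \times 756}{1920 \times 1080} \approx 0.49 \quad \text{(roughly half the pixels of native 1080p)} \]
\[ 512 \times 2 \times 2.10\ \mathrm{GHz} \approx 2.15\ \mathrm{TFLOPS\ peak\ FP32} \]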



View at TechPowerUp Main Site
 
Delicious indeed. Now if only hybrid crossfire had been worked out properly...
 
Holy shit, how, AI?

Did the CPU finish the game?

Delicious indeed. Now if only hybrid crossfire had been worked out properly...

Dream on... with the new pipeline for RT in GPUs, I don't think any sort of mGPU solution is going to really make waves anytime soon. If they do, I think it's more likely they will dedicate specific render tasks to one of the GPUs instead of the classic split/consecutive-frame approach. We already have some of those tricks.
 
lol,"munch through" at 40 fps :roll:
which is still way slower than 2014 midrange 970 running all high

I suggest anyone spend 60 dollars on a card instead of the game
 
Sounds nice, but we still need a 3080 competitor.

AMD did very well on the CPU side overall but has ignored the high-end GPU market lately, and that's not good for the end customer at all.
 
Impressive for an iGP.



The cheapest 1050 Ti is 165 euros in France.



Perfect for my wife's PC; she plays small games.
 
It's funny, it turns out that Vega was one of the most scalable GPU architectures to date. From 300 W compute cards to integrated graphics, and running at over 2 GHz at that.
 
Not bad :) The iGPUs are getting better, that is for sure. Wonder if the Vega 11 is much faster.

Sounds like the Vega 8 is the same as in the 2200G.
No it doesn't. For the 2200G - Low details all the way and 720p (with resolution upscale) - you get a 43 FPS average.
 
Holy shit, how, AI?

Did the CPU finish the game?



Dream on... with the new pipeline for RT in GPUs, I don't think any sort of mGPU solution is going to really make waves anytime soon. If they do, I think it's more likely they will dedicate specific render tasks to one of the GPUs instead of the classic split/consecutive-frame approach. We already have some of those tricks.

Actually, you could be wrong about that... While prowling the AMD patents, something I do regularly, I came across a recent one detailing something called "GPU masking". Now, in the patent, I believe it outlined the use of this technique on multi-GPU MCMs, which is basically taking the strategy used on Ryzen and applying it to GPUs.

The secret to it is that the system and the API (DirectX) see the MCM GPU as a single, logical entity, and if I remember correctly, it accomplished the same feat with multiple memory pools. I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to target multiple GPUs specifically. If this is the case, then AMD may be able to bust the GPU market wide open.

That said, I would imagine that the same technique could possibly be applied to multi-GPU/multi-card setups somehow.
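To make that a bit more concrete: in today's D3D12, the closest existing analogue is a linked-node adapter, where one adapter reports several "nodes" behind a single logical device. A minimal sketch of my own (nothing from the patent, just the current API) that enumerates adapters and prints how many nodes each device reports:

// My own minimal sketch, not AMD's "GPU masking": in current D3D12, one adapter
// reporting more than one node (ID3D12Device::GetNodeCount) is how several physical
// GPUs behind a single logical device are exposed today (linked-node adapters).
// Build (MSVC): cl /EHsc nodes.cpp d3d12.lib dxgi.lib
#include <windows.h>
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // A "masked" MCM part would presumably still show up as one adapter here,
            // just with more silicon behind it; current hardware almost always reports 1 node.
            wprintf(L"Adapter %u: %s - nodes: %u\n",
                    i, desc.Description, device->GetNodeCount());
        }
    }
    return 0;
}

The point, as I read the patent, is that games would never have to go beyond this single-device view, unlike the explicit multi-adapter path DX12 also offers.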
 
Indeed, MCM is easy for GPUs; the reason it hasn't been done, I suspect, is simply that there has been no need for it so far.
 
Not bad :) The iGPUs are getting better, that is for sure. Wonder if the Vega 11 is much faster.


No it doesn't. For the 2200G - Low details all the way and 720p (with resolution upscale) - you get a 43 FPS average.

Don't feed the troll; just look at his post history, nothing useful comes from him. It's a mystery he wasn't banned on his first day.
 
Dream on... with the new pipeline for RT in GPUs, I don't think any sort of mGPU solution is going to really make waves anytime soon. If they do, I think it's more likely they will dedicate specific render tasks to one of the GPUs instead of the classic split/consecutive-frame approach. We already have some of those tricks.

Hence "had been".

lol,"munch through" at 40 fps :roll:
which is still way slower than 2014 midrange 970 running all high

I suggest anyone spend 60 dollars on a card instead of the game

Sigh.
 
Sounds nice, but we still need a 3080 competitor.

AMD did very well on the CPU side overall but has ignored the high-end GPU market lately, and that's not good for the end customer at all.

Whenever people make statements like this, I feel like they're completely ignoring the fact that the TAM (total addressable market) for x86 is far larger than that of the consumer dGPU market (or the commercial dGPU market, or both combined). AMD was in a precarious position; they saw an opportunity in x86 and took it, and it worked out incredibly well for them, which I believe demonstrates that their focus on x86 was the smartest choice.

Furthermore, AMD isn't as well positioned in x86 as people assume. Yes, they created several salients in the frontline, but they still need to consolidate those advances so as not to be driven back by a resurgent Intel. That's why Intel delaying 7 nm is actually good for all of us, as AMD still needs to capture much more market share, specifically in mobile, OEM, and enterprise, so that when Intel does strike back, they're in a much better position, especially on the front of mindshare.

Mindshare is where Nvidia beat them, as there were several times in the late 2000s when AMD not only had faster video cards but cheaper ones as well, and Nvidia still outsold them. To rational enthusiasts, this doesn't make sense, but unfortunately, the vast majority of consumers don't approach purchases from a rational angle in which they tediously compare empirical data as we do... Whether they want to admit it or not, they base their decisions on feelings (what psychologists have identified as social identity theory and in-group/out-group psychology, which also explains the behavior of fanboys).

AMD needs to continue building mindshare in x86, where they're winning, so that the average consumer identifies the brand with superiority. Once that is achieved, AMD will have much greater success in the dGPU market, and products like the 5700 XT, for example, which is a better value than both the 2070S and the 2060S, will translate into better sales than those two competing cards as well.

Don't worry, Nvidia's time will come, as today's empires are tomorrow's ashes, and I personally believe that RDNA2 will be a harbinger of that time. When the new consoles inevitably impress reviewers and regular consumers alike, and more of them associate that clear generational increase in performance with AMD and their hardware, it will reverberate in other related markets. Let's have a bit of faith in Lisa Su, as she certainly deserves it. RDNA2 could very well be a Zen 2 moment, and if the leaks I've been reading are true, Nvidia is concerned.
 
Not bad :) The iGPUs are getting better, that is for sure. Wonder if the Vega 11 is much faster.


No it doesn't. For the 2200G - Low details all the way and 720p (with resolution upscale) - you get a 43 FPS average.
Lower GPU clock, slower CPU, slower memory, different game sequence - dumb comparison.

The news says it's still Vega 8; you can spin it however you want.
 
Very cool. Can't wait to upgrade my 3400G HTPC
 
Actually, you could be wrong about that... While prowling the AMD patents, something I do regularly, I came across a recent one detailing something called "GPU masking". Now, in the patent, I believe it outlined the use of this technique on multi-GPU MCMs, which is basically taking the strategy used on Ryzen and applying it to GPUs.

The secret to it is that the system and the API (DirectX) see the MCM GPU as a single, logical entity, and if I remember correctly, it accomplished the same feat with multiple memory pools. I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to target multiple GPUs specifically. If this is the case, then AMD may be able to bust the GPU market wide open.

That said, I would imagine that the same technique could possibly be applied to multi-GPU/multi-card setups somehow.

We also saw an Nvidia paper on MCM design. And yet, neither Navi nor Ampere is anything like it. That was pre-DX12, too.
We also saw SLI and Crossfire die off.
We also saw LucidVirtu and other solutions to combine IGP and dGPU.
We also saw mGPU in DirectX 12 limited to a proof of concept in Ashes of the Singularity, but even that doesn't work anymore.

And yes, there is of course the off chance AMD will 'break open' the GPU space. Like they are poised to do every year. Mind if I take my truckload of salt? So far, they are trailing by more than a full generation in both performance and feature set. The gap isn't getting smaller. It's getting bigger.
 
Pretty decent for an iGP, hope we see these in some NUCs in the future.

Also, Cucker, you make me cringe.
 
Pretty decent for an iGP, hope we see these in some NUCs in the future.

Also, Cucker, you make me cringe.
lol, sorry, I should be more excited about a Vega 8 iGPU coming in H2 2020
 
It's funny, it turns out that Vega was one of the most scalable GPU architectures to date. From 300 W compute cards to integrated graphics, and running at over 2 GHz at that.

The arch is actually pretty good, especially at lower clocks or with a smaller number of blocks; even Intel integrated it into its CPUs. The Vega 64, though, was designed more for compute than for gaming.
 
It's funny, it turns out that Vega was one of the most scalable GPU architectures to date. From 300 W compute cards to integrated graphics, and running at over 2 GHz at that.
So scalability works down now?
 
iGPUs were and will be useless for most gaming. Now, if AMD can somehow use the iGPU for actual accelerated computing, like what Intel did with theirs, that would be a completely different story.
 
iGPUs were and will be useless for most gaming. Now, if AMD can somehow use the iGPU for actual accelerated computing, like what Intel did with theirs, that would be a completely different story.
Care to give an example?

I believe that both support OpenCL and DirectCompute, and both feature video decoding/encoding blocks that software vendors can utilize.
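For example, a software vendor can pick the iGPU up through OpenCL just like a discrete card. A minimal sketch (assuming the vendor's OpenCL runtime is installed; the names printed depend on the driver) that lists the GPU devices a program could offload to:

// Minimal sketch: list OpenCL GPU devices available for offloading compute.
// On a Renoir/Picasso APU with the driver installed, the Vega iGPU shows up here.
// Build: g++ cldevices.cpp -lOpenCL
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    if (clGetPlatformIDs(8, platforms, &numPlatforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < numPlatforms; ++p)
    {
        cl_device_id devices[8];
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &numDevices) != CL_SUCCESS)
            continue;   // this platform exposes no GPU devices

        for (cl_uint d = 0; d < numDevices; ++d)
        {
            char name[256] = {};
            cl_uint computeUnits = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(computeUnits), &computeUnits, nullptr);
            // e.g. the 4700G's Vega iGPU should report 8 compute units here
            printf("GPU device: %s (%u CUs)\n", name, computeUnits);
        }
    }
    return 0;
}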
 
Holy shit, how, AI?

Did the CPU finish the game?



Dream on... with the new pipeline for RT in GPUs, I don't think any sort of mGPU solution is going to really make waves anytime soon. If they do, I think it's more likely they will dedicate specific render tasks to one of the GPUs instead of the classic split/consecutive-frame approach. We already have some of those tricks.
That's one possibility of many approaches. I could see a GPU dedicated to just the TMUs, mip maps, LOD, geometry, lighting/shading passes, denoise, sharpening/blur, physics, AI, scaling, compression, storage caching, etc. Really, a mixture of any of those tasks could probably be split and balanced between several GPUs. I could certainly see an mGPU setup where the second GPU is pretty much dedicated to post-process render techniques along with physics, AI, and scaling, and you might have a third GPU dedicated purely to RTRT with a primary GPU for pure rasterization, or make both lean heavily towards one versus the other, like a 25/75 weight ratio.

I'm probably being over-optimistic about it, though, and it's mostly just wishful thinking, but I could see post-process techniques and drastically different rendering approaches like rasterization versus ray tracing being separated into a round-robin mGPU synchronization structure, in a kind of variable-rate-shading-like, semi-modular, flexible manner that could somewhat resemble what Lucid Hydra attempted to achieve with mGPU. The real question is whether AMD, Nvidia, or even Intel has a compelling enough reason to turn that kind of thing into reality!?

Another interesting thing to think about is augmented reality and how that could impact GPUs of the future. Will we see a GPU dedicated to that, or perhaps some sort of 3D augmented glasses that work in tandem with a display with an integrated GPU that helps it do some kind of post-process magic on the display end, similar to the mClassic but far more advanced? On-the-fly post-processing is certainly not beyond the means of possibility; it could be done by an external GPU or even be integrated into the display itself. A good example of cool real-time post-process hardware is something like Roland V-Link for VJ stuff, and we've also got multiviewers that can split four signals into quadrants. Maybe we'll see GPUs that can split the entire scene into FreeSync quadrants, each rendered by a different GPU (there's a rough sketch of what I mean at the end of this post). Possibly you could even see checkerboard rendering for the quadrants, or interlacing, or a quick interlaced scaling as a fallback routine.

If you think about all the tech that is available, it seems like these companies could come up with something that actually works reasonably well, without all the terrible pitfalls of SLI/CF or Lucid Hydra or even something like SoftTH. I mean, really, surely somebody can come up with something that pretty much works flawlessly. I'm surprised we haven't seen a technique for checkerboarding at the interlaced scan-line level yet, combined with something like FreeSync on the display itself along with on-the-fly post-process scaling and basic image enhancement. I definitely feel as if mGPU just needs a good breakthrough to really shine and truly be an appealing, no-nonsense solution.

The way I see it, if the big companies like AMD/Nvidia/Intel can't come up with good approaches, let's damn well hope someone like Marseille or Darebee can up the ante and raise the bar. Something interesting I had thought of that could be a neat experiment is taking a projector, maybe a pico, doing display cloning on a GPU, and then aligning and matching the projector's pixels with a display it's aimed at. Now, if you can adjust the contrast/brightness and some stuff like that on the projector, you might essentially get a sort of free-layer anti-aliasing effect going on due to the blending of the two images, but I don't know how effective it could be in practice; fun to hypothesize about what it might be like, though. You could think of that as sort of a deep-fake kind of effect.
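To put the quadrant idea above into something concrete, here's a purely illustrative C++ sketch of splitting a frame into four tiles and handing them out round-robin; the GPU count and the print-only "assignment" are made up, and a real implementation would still have to deal with compositing, synchronization, and transfer cost:

// Purely illustrative: split a 1920x1080 frame into 2x2 tiles and assign them
// round-robin to however many GPUs are assumed to be present.
#include <cstdio>
#include <vector>

struct Tile { int x, y, width, height; };

// Divide the frame into a 2x2 grid of tiles (quadrants).
std::vector<Tile> makeQuadrants(int frameW, int frameH)
{
    const int halfW = frameW / 2;
    const int halfH = frameH / 2;
    return {
        {0,     0,     halfW,          halfH},
        {halfW, 0,     frameW - halfW, halfH},
        {0,     halfH, halfW,          frameH - halfH},
        {halfW, halfH, frameW - halfW, frameH - halfH},
    };
}

int main()
{
    const int gpuCount = 2;   // hypothetical: a primary dGPU plus the iGPU, say
    int tileIndex = 0;
    for (const Tile& t : makeQuadrants(1920, 1080))
    {
        const int gpu = tileIndex++ % gpuCount;   // round-robin assignment
        printf("GPU %d renders tile at (%d,%d), size %dx%d\n",
               gpu, t.x, t.y, t.width, t.height);
    }
    return 0;
}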
 