Tuesday, July 28th 2020
AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal 1080p by Itself
Hot on the heels of a June story of an 11th Gen Core "Tiger Lake" processor's Gen12 Xe iGPU playing "Battlefield V" by itself (without a graphics card), Tech Epiphany brings us an equally delicious video of an AMD Ryzen 7 4700G desktop processor's Radeon Vega 8 iGPU running "Doom Eternal" by itself. id Software's latest entry in the iconic franchise is well optimized for the PC platform to begin with, but it's impressive to see the Vega 8 munch through this game at 1080p (1920 x 1080 pixels) with no resolution scaling and mostly "High" details. The game is shown running at frame rates ranging between 42 and 47 FPS, holding above 37 FPS even in close-quarters combat (where the enemy models are rendered with more detail).
With a 70% resolution scale, frame rates are shown climbing past 50 FPS. At this point, when the detail preset is lowered to "Medium," the game inches close to the magic 60 FPS figure, swinging between 55 and 65 FPS. The game is also shown utilizing all 16 logical processors of this 8-core/16-thread processor. Despite having just 8 "Vega" compute units, amounting to 512 stream processors, the iGPU in the 4700G is free to dial its engine clock (GPU clock) all the way up to 2.10 GHz, which helps it overcome much of the performance deficit against the Vega 11 solution found in the previous-generation "Picasso" silicon. Watch the Tech Epiphany video presentation in the source link below.
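For readers curious about the numbers above, here is a quick back-of-the-envelope sketch. It assumes the resolution scale is applied per axis (a common convention, though games can also scale by total pixel count), and uses the standard GCN/Vega figure of 64 stream processors per compute unit; the function name is ours, not anything from AMD or id Software.

```python
def effective_resolution(width, height, scale):
    """Approximate render resolution at a given resolution-scale setting,
    assuming the scale factor is applied to each axis independently."""
    return round(width * scale), round(height * scale)

# 1080p at the 70% resolution scale mentioned in the article
w, h = effective_resolution(1920, 1080, 0.70)
print(w, h)  # 1344 756 -- roughly 49% of the pixels of native 1080p

# GCN/Vega: 64 stream processors per compute unit
compute_units = 8
print(compute_units * 64)  # 512, matching the Vega 8 in the 4700G
```

Note that scaling each axis to 70% leaves only about half the pixel load of native 1080p, which is why the frame-rate gain is so pronounced.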
Source:
Tech Epiphany (YouTube)
66 Comments on AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal 1080p by Itself
Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?
The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...
... at nearly 60FPS
... with some details turned down.
Remember how Intel shoehorned dual cores for a decade? Same thing. Plus DDR4 is on its last legs; there is no point in a major GPU revamp at this point. Yeah it did. Are we going on the same old rhetoric of "if it isn't a million times faster it's irrelevant"?
The APU in the past was supposed to mesh CPU and GPU into one, using the CPU for integer work and the GPU for floating point. That's unlike Intel, which places a GPU as a display engine and video coding engine with light gaming ability. The plan was to leverage the floating-point processing power of the GPU instead of a hardware decoder and encoder.
The second thing is price and margin. I always thought it would be cool to see 8 Jaguar cores with 40 CUs and 8 GB of GDDR5 VRAM. But how are they going to produce a product out of those chips? Placing them in gaming notebooks seems reasonable, but how can they make great margins out of those products? Also, where else can they sell those chips? They are big chips, after all, while Renoir is a small chip: they can sell it in high volume at a high profit margin, and they can salvage more parts. It also seems to me that it is easier to design a cooling system for a small chip with low heat output than for a big chip with high heat output.
At the end of the day, it comes down to margins and the technology that is available for AMD to leverage. DDR5 is coming around the corner; denser transistors and matured architectures will allow AMD to really get down and dirty with their APUs. I think Renoir is good for what it is, and that Renoir is showing off their CPU ability rather than being an APU.
More importantly, I didn't realise how unexciting Doom Eternal looks if you play it on easy. After being mildly frustrated at the random difficulty spikes in Doom 2016 on Nightmare and being unable to dial it down from Nightmare, I started Doom Eternal on Ultra-Violence, but quickly realised that id have given us way more tools to kill with and made monster vulnerabilities to specific weapons more obvious - so it's a safer game than 2016 on Nightmare, as it's never overwhelmingly hard.
I still had a few arenas that took multiple attempts but surely the satisfaction of Doom's gameplay is about overcoming ridiculous odds with evasion and tactics. Playing it on easy for 'the story' doesn't really work for Doom because the story is garbage - I've forgotten it already.
OMG! Now I have to sell four GPUs...
show me a cheaper and cleaner display output solution for troubleshooting or when you're in between cards
that hard for tpu's news editor to find just one reason? ryzen's ram speed compatibility went from barely doing 3200 on 1st gen to 3800 on r3000
yet people are somehow happy that amd still uses old ass vega 8.
I hope xe apus completely kick its ass,cause if it doesn't,it'd frankly be a colossal fail not to outperform a gcn based solution. exactly my point
vya thinks it's cool to jump on intel but praise amd
intel using quad cores as flagship cpus and amd using a vega 8 as flagship apu is exactly the same thing - small steps,little innovation,cause no competition.
it's understandable on a quad core like the 3400,it's a value proposition.but for an 8/16? how is that even good when 7nm rdna1 has been out for a year and rdna2 is close?
if rdna2 got delayed and nvidia just oc'd turings and sold them as "plus" skus,would the same people be that enthusiastic ? cause that's what happened.intel got delays,amd used that to push higher clocked vega apus on r3000
although I compared all three and cuda is the best one,quicksync worked very well on 5775c but files are big.opengl is the slowest.
Hopefully one day they will become even more powerful.
had two r9 290 cards die,each took a month for rma
had no post issues,had to check what component it was
sold my 1080ti for a really nice sum opportunistically,had to wait for a new card to arrive
and that costs 20 dollars,the output is already there on the mobo
Seriously man, it's times like these the truth about bias comes out... Do you see it, or? Can you admit it's strange to look at it that way? Or do you have some good reason for it? It puzzles me, as you seem like an intelligent person. (No sarcasm involved here)
Another way to look at it as well: look at Zen. The very moment AMD offered something the competitor had no answer to, they won market share and mind share big time. What @Assimilator and @cucker tarlson are saying is: why the hell are they not pushing the IGP to a level that puts them in a similar position for bottom-end GPU performance? And perhaps snipe some of the midrange along with it? They DID pursue the IGP for a long time... What's left of that strategy then? And it's doubly strange because they are now FINALLY in a position to combine strong CPU performance with a strong IGP, in a laptop. That is a huge potential market and they can take share from not just Intel, but also Nvidia. It's really weird to see GPU tech so behind in that sense.
Similarly, but that's just me and thoughts running wild... why is there no movement towards a Threadripper-sized socket with a matching chip that has lots of space for an IGP? That would enable dGPU performance from the CPU socket right away, with a lower clock and a much higher EU count, if you can keep it from burning up - which I'm sure is possible given the larger surface area and how low they can push Ryzen TDPs. Intel had its ultrabooks and pretty much dedicated chips for them; why is AMD not moving towards thought leadership in that sense? They have every reason to. Well, I'm sure your patients are still alive, but surely you've seen how SLI fingers have vanished lately. That means it's becoming increasingly not worthwhile for devs to cater to them.
If it were as simple as copy-pasting RDNA logic into the design, AMD would have done it by now. Chances are good that the current Vega APU cores are highly optimised for DDR4 and HSA / unified memory access. Adding that to Navi when AMD already have their hands full with Big Navi, console chips, Zen 4, TSMC's EUV tweaks, and of course the importance of getting things right the first time given that they're now bidding against a host of other companies for fab time at TSMC....
The multi-GPU patent you describe is of course a perfect fit for Infinity Fabric - as u ~say, it's like transferring the Zen architecture to the GPU.
Cache coherency is Fabric in a nutshell. AMD's focus on it is at the root of their success, so the patents you describe don't surprise me. That is exactly where I think they would be trying to go.
Clearly we have a task (graphics) too big for a single GPU, but we are at an enduring impasse in teaming multiple processors (SLI & CrossFire ~fails)... multiple cheap, easily cooled & efficient GPUs would have a more drastic effect on the GPU than Zen did on the CPU.
Specifically (I am not competent in GPU tech, but) I have long suspected that Vega was preferred for Renoir for secret reasons - not the timing factors officially stated.
There seem to be apps (scientific/math, e.g.?) where Vega is preferred. Maybe it's more suited to some even more tempting prize than better consumer gaming?
AI is changing the usual processing paradigms - AI raw data is potentially so vast that the consequent slow, costly transmission over any distance makes processing by mini nodes at the edge of the data storage much more attractive - ~decentralising.
Maybe banks of tightly integrated hybrid-processor APUs are suited, & can form a big new market to add to their already broad appeal?
The patents would fit AMD's MO very well - they love serving multiple markets with easily scaled variants of a few standard ingredients, or even better, winning a new tier or market w/ ~existing recipes. (The 3900 & 3950 12- and 16-core Zens paired w/ X570 mobos recently invaded a big patch of workstation turf using desktop CPUs.) Similarly, 64-core & 2x-RAM TR has charged upscale into ~Epyc turf.