
AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal 1080p by Itself

Care to give an example?
QuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.
 
QuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.

Except that doesn't really have anything to do with compute acceleration. Every GPU has encode/decode hardware.
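To put that claim on footing you can check yourself, here's a minimal Python sketch (mine, purely illustrative) that asks an ffmpeg build which hardware H.264 encoders it exposes - h264_qsv for Intel Quick Sync, h264_amf and h264_vaapi for AMD's encode blocks, h264_nvenc for NVIDIA. Those are real ffmpeg encoder names, but which ones actually show up depends entirely on your ffmpeg build and hardware.

Code:
import subprocess

# Real ffmpeg hardware H.264 encoders; availability varies by build/hardware.
HW_ENCODERS = {
    "h264_qsv":   "Intel Quick Sync Video",
    "h264_amf":   "AMD AMF/VCE (Windows)",
    "h264_vaapi": "VA-API (Linux; covers AMD VCN and Intel)",
    "h264_nvenc": "NVIDIA NVENC",
}

def available_encoders():
    """List the hardware H.264 encoders this ffmpeg build exposes."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [name for name in HW_ENCODERS if name in out]

if __name__ == "__main__":
    for enc in available_encoders():
        print(f"{enc}: {HW_ENCODERS[enc]}")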
 
cucker has a very valid point, though. Vega was developed to compete with Pascal, which it didn't. Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017 into APUs they're releasing in 2020? Why aren't they using the newer and much more power-efficient Navi?

Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?

The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...

... at nearly 60FPS
... with some details turned down.
 
Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017

Because it works? Does Intel have something earth-shattering and we missed it?

Remember how Intel shoehorned dual cores for a decade? Same thing. Plus DDR4 is on its last legs; there is no point in a major GPU revamp at this point.

Vega was developed to compete with Pascal, which it didn't.

Yeah it did. Are we going on the same old rhetoric of "if it wasn't a million times faster it's irrelevant"?
 
AI has finally attained awareness, and the first thing it does is play Doom because it's protesting work conditions and wages.
 
Because it works? Does Intel have something earth-shattering and we missed it?

"Because it works" is how we got 3 successors to Bulldozer.

Remember how Intel shoehorned dual cores for a decade? Same thing.

Most people, myself included, have a big issue with Intel's lack of innovation - as we should. I don't intend to hold AMD to different standards.

Plus DDR4 is on its last legs; there is no point in a major GPU revamp at this point.

Intel's roadmap has DDR5 in 2021 (which we know won't happen), AMD's 2022. So "last legs" = "2 years"? Remind me, how often is AMD intending to release a new version of Zen?

Yeah it did. Are we going on the same old rhetoric of "if it wasn't a million times faster it's irrelevant"?

Considering how low the bar for integrated graphics has been set by Intel, yes.
 
I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to program specifically for multiple GPUs. If this is the case, then AMD may be able to bust the GPU market wide open.

It means this is how they envision it working. Doesn't mean they can or cannot do it yet. I wish you needed a working prototype to file a patent...
 
"Because it works" is how we got 3 successors to Bulldozer.

You know Bulldozer and its iterations worked just fine for their price. What you may fail to understand is what APUs are and what they are intended for. If you hope AMD or Intel will pour their heart and soul into making the fastest imaginable integrated GPU, you are sorely mistaken. You think it would be difficult for AMD to, say, double or even triple their CU counts? The reason they're not doing it is that APUs are meant to offer low-end performance and be cheap; that being said, you can't expect much more than we already have right now.
 
First of all, the Vega graphics in Renoir was planned well ahead of time. Navi launched only half a year before Renoir, which was always meant to use Vega graphics. Also, this Vega implementation has been optimised for low-power mobile usage; Navi likely hasn't been optimised for low power yet, and Vega is a mature and reliable architecture compared to Navi. AMD had to make sure that nothing got in the way of this launch.

The APU was originally supposed to mesh CPU and GPU into one: using the CPU for integer work and the GPU for floating point. Unlike Intel, which places a GPU as a display engine and video coding engine with light gaming ability, the plan was to leverage the floating-point processing power of the GPU, not just its hardware decoder and encoder.
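To make the "GPU as the floating-point unit" idea concrete, here is a minimal sketch of offloading FP math to any OpenCL-capable GPU (the Vega in Renoir included). This is just my illustration, assuming the pyopencl package is installed; it is not how HSA was actually wired up.

Code:
import numpy as np
import pyopencl as cl

# Pick an OpenCL device; on a Renoir-class system this can be the iGPU.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# The floating-point work the CPU hands off to the GPU's shader ALUs.
prog = cl.Program(ctx, """
__kernel void saxpy(__global const float *a,
                    __global const float *b,
                    __global float *out) {
    int i = get_global_id(0);
    out[i] = 2.0f * a[i] + b[i];
}
""").build()

mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog.saxpy(queue, (n,), None, a_g, b_g, out_g)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, 2.0 * a + b)  # CPU double-checks the GPU's FP result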

The second thing is price and margin. I always thought it would be cool to see 8 Jaguar cores with 40 CUs and 8 GB of GDDR5 VRAM. But how would they build a product out of those chips? Placing them in gaming notebooks seems reasonable, but how could they make good margins on such a product, and where else could they sell those chips? They would be big chips, after all, while Renoir is a small chip: AMD can sell it in high volume at a high profit margin and salvage more parts per wafer. It also seems easier to design a cooling system for a small chip with low heat output than for a big chip with high heat output.

At the end of the day, it comes down to margins and the technology available for AMD to leverage. DDR5 is around the corner; denser transistors and matured architectures will let AMD really get down and dirty with their APUs. I think Renoir is good for what it is, and I think it is showing off AMD's CPU ability rather than being a true APU.
 
but ignored the GPU high end lately
This was for two reasons: 1. AMD needed to focus their resources on what could make them the most money, and 2. there is much more money to be made in the mid-range and budget sectors of the GPU market. And let's be fair, the 5600/XT and 5700/XT are winning cards for the money.
 
That's really impressive performance from an IGP, though it's worth noting that Doom Eternal scales up and down really well and still looks incredible at lowest settings.

More importantly, I didn't realise how unexciting Doom Eternal looks if you play it on easy. After being mildly frustrated by the random difficulty spikes in Doom 2016 on Nightmare, and being unable to dial it down from Nightmare, I started Doom Eternal on Ultra-Violence but quickly realised that id have given us way more tools to kill with and made monster vulnerabilities to specific weapons more obvious - so it's a safer game than 2016 on Nightmare, as it's never overwhelmingly hard.

I still had a few arenas that took multiple attempts but surely the satisfaction of Doom's gameplay is about overcoming ridiculous odds with evasion and tactics. Playing it on easy for 'the story' doesn't really work for Doom because the story is garbage - I've forgotten it already.
 
Are you stating that the encode/decode functionality is somehow broken in Renoir compared to Intel's offering?

No, he's stating that AMD doesn't have Quick Sync. Do you have reading comprehension problems?
 
QuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.
You are joking, right?
Show me a cheaper and cleaner display output solution for troubleshooting or for when you're in between cards.
That hard for TPU's news editor to find just one reason?

cucker has a very valid point, though. Vega was developed to compete with Pascal, which it didn't. Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017 into APUs they're releasing in 2020? Why aren't they using the newer and much more power-efficient Navi?

Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?

The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...

... at nearly 60FPS
... with some details turned down.
Ryzen's RAM speed compatibility went from barely doing 3200 on 1st gen to 3800 on the R3000 series,
yet people are somehow happy that AMD still uses old-ass Vega 8.
I hope Xe APUs completely kick its ass, because if they don't, it'd frankly be a colossal fail not to outperform a GCN-based solution.

Most people, myself included, have a big issue with Intel's lack of innovation - as we should. I don't intend to hold AMD to different standards.
Exactly my point.
vya thinks it's cool to jump on Intel but praise AMD,
while shipping quad cores as flagship CPUs and using a Vega 8 as the flagship APU are exactly the same thing - small steps, little innovation, because there's no competition.

It's understandable on a quad core like the 3400; it's a value proposition. But for an 8/16? How is that even good when 7nm RDNA1 has been out for a year and RDNA2 is close?

If RDNA2 got delayed and NVIDIA just OC'd Turings and sold them as "plus" SKUs, would the same people be that enthusiastic? Because that's what happened: Intel got delayed, and AMD used that to push higher-clocked Vega APUs on the R3000 series.
 
All I am going to say is: if my discrete, hyper-cooled Vega 64 ran at 2.1 GHz, I would not be budgeting for whatever Big Navi is. That clock speed is crazy, and Doom Eternal at 1080p (High) is not possible with any of the current desktop APUs.
 
Are you stating that the encode/decode functionality is somehow broken in Renoir compared to Intel's offering?
If it does OpenCL it should be fine for e.g. DaVinci,
although, having compared all three, CUDA is the best one; QuickSync worked very well on the 5775C, but the files are big. OpenCL is the slowest.
 
Doing 30-40 FPS at 1080p with High settings for an iGPU is very impressive. Incoming low-baller gaming PC from me xD
 
A proper iGPU gives people additional options should their main GPU be faulty. I can't stress enough how many times I've had my main GPU die and found myself without a graphics rendering device. This seems to be a great gap-filler should the need arise, and it seems very capable of playing older game titles and some newer ones too.

Hopefully one day they will become even more powerful.
 
A proper iGPU gives people additional options should their main GPU be faulty. I can't stress enough how many times I've had my main GPU die and found myself without a graphics rendering device. This seems to be a great gap-filler should the need arise, and it seems very capable of playing older game titles and some newer ones too.

Hopefully one day they will become even more powerful.
You've gotta be really unimaginative not to think of a single use for an iGPU.

Had two R9 290 cards die; each took a month for RMA.
Had no-POST issues and had to check which component it was.
Sold my 1080 Ti for a really nice sum opportunistically and had to wait for the new card to arrive.

And that costs 20 dollars; the output is already there on the mobo.
 
Because it works? Does Intel have something earth-shattering and we missed it?

Remember how Intel shoehorned dual cores for a decade? Same thing. Plus DDR4 is on its last legs; there is no point in a major GPU revamp at this point.



Yeah it did. Are we going on the same old rhetoric of "if it wasn't a million times faster it's irrelevant"?

Lol, the man above you is literally saying 'because it works' is not enough, and 'better than Intel' isn't either. Your first response: but it works and is faster than Intel. That is just about what Intel was doing for the last decade with their CPUs as a whole, and I don't remember you saying that was just fine.

Seriously man, it's times like these the truth about bias comes out... Do you see it, or? Can you admit it's strange to look at it that way? Or do you have some good reason for it? It puzzles me, as you seem like an intelligent person. (No sarcasm involved here)

Another way to look at it as well: look at Zen. The very moment AMD offered something the competitor had no answer to, they won market share and mind share big time. What @Assimilator and @cucker tarlson are saying is: why the hell are they not pushing the IGP to a level that puts them in a similar position for bottom-end GPU performance? And perhaps snipe some of the midrange along with it? They DID pursue IGP for a long time... What's left of that strategy, then? And it's doubly strange because they are now FINALLY in a position to combine strong CPU performance with a strong IGP, in a laptop. That is a huge potential market and they can take share from not just Intel but also Nvidia. It's really weird to see GPU tech so far behind in that sense.

Similarly, but that's just me and thoughts running wild... why is there no movement towards a Threadripper-sized socket with ditto chip that has lots of space for IGP? That would enable dGPU perf from the CPU socket right away, with lower clock and a much higher EU count, if you can keep it from burning up - which I'm sure is possible given the larger surface area, and if you look at how low they can push Ryzen TDPs. Intel had its ultrabooks and pretty much dedicated chips for them; why is AMD not moving towards thought leadership in that sense? They have every reason to.

They're DEAD!?

OMG! Now I have to sell four GPUs...

Well, I'm sure your patients are still alive, but surely you've seen how SLI fingers have vanished lately. That means it's becoming increasingly not worthwhile for devs to cater to them.
 
I can't remember what AMD's excuse for Renoir still being Vega is, but Cezanne (5000-series APUs) will be Zen3 but also still Vega.

If it were as simple as copy-pasting RDNA logic into the design, AMD would have done it by now. Chances are good that the current Vega APU cores are highly optimised for DDR4 and HSA / unified memory access. Adding that to Navi, when AMD already have their hands full with Big Navi, console chips, Zen4, TSMC's EUV tweaks, and of course the importance of getting things right the first time given that they're now bidding against a host of other companies for fab time at TSMC....
 
Actually, you could be wrong about that... While prowling the AMD patents, something I do regularly, I came across a recent one detailing something called "GPU masking". Now, I believe the patent outlined the use of this technique on multi-GPU MCMs, which is basically taking the strategy used on Ryzen and applying it to GPUs.

The secret to it is that the system and API (DirectX) sees the MCM GPU as a single, logical entity and if I remember correctly, it accomplished the same feat with multiple memory pools. I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to program specifically for multiple GPUs. If this is the case, then AMD may be able to bust the GPU market wide open.
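To make that concrete, here is a purely speculative Python sketch of the idea as I read it: one logical device that shards work across several dies, so the caller (standing in for DirectX) never knows how many exist. Every name below is made up for illustration; the patent doesn't describe an implementation at this level.

Code:
from dataclasses import dataclass, field

@dataclass
class Chiplet:
    die_id: int
    completed: list = field(default_factory=list)

    def execute(self, span: range) -> None:
        # Each die handles its slice; keeping the per-die memory pools
        # coherent is the hard part the patent's "masking" has to solve.
        self.completed.append((span.start, span.stop))

class LogicalGPU:
    """What the OS/API would see: one device, one command queue."""

    def __init__(self, num_dies: int):
        self._dies = [Chiplet(i) for i in range(num_dies)]

    def submit(self, work_items: int) -> None:
        # Transparently shard one dispatch across all dies.
        step = -(-work_items // len(self._dies))  # ceiling division
        for die in self._dies:
            lo = die.die_id * step
            die.execute(range(lo, min(lo + step, work_items)))

gpu = LogicalGPU(num_dies=4)  # an MCM of four chiplets
gpu.submit(1024)              # the game just sees "one GPU"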

That said, I would imagine that the same technique could possibly be applied to multiple GPU/video card setups somehow.

That is very cool. I too think AMD is up to something, and you're the first who seems on a similar wavelength - not because I am clever, but for the humble reason that I think Lisa knows what she is doing better than most bloggers.

The multi-GPU patent you describe is of course a perfect fit for Infinity Fabric - as you say, it's like transferring the Zen architecture to GPUs.

Cache coherency is Fabric in a nutshell. AMD's focus on it is at the root of their success, so the patents you describe don't surprise me. That is exactly where I think they would be trying to go.

Clearly we have a task (graphics) too big for a single GPU, but we are at an enduring impasse in teaming multiple processors (SLI and CrossFire were near-failures)... Multiple cheap, easily cooled and efficient GPUs would have a more drastic effect on GPUs than Zen did on CPUs.

Specifically (I am not competent in GPU tech, but) I have long suspected that Vega was preferred for Renoir for unstated reasons - not the timing factors officially given.

There seem to be apps (scientific/math, e.g.?) where Vega is preferred. Maybe it's more suited to some even more tempting prize than better consumer gaming?

AI is changing the usual processing paradigms - the raw data is potentially so vast, and transmitting it any distance so slow and costly, that processing by mini nodes at the edge of the data storage becomes much more attractive - decentralising, in effect.

Maybe banks of tightly integrated hybrid-processor APUs are suited to that, and can form a big new market to add to AMD's already broad appeal?

The patents would fit AMD's MO very well - they love serving multiple markets with easily scaled variants of a few standard ingredients, or, even better, winning a new tier or market with existing recipes. (The 12- and 16-core 3900 and 3950 paired with X570 mobos recently invaded a big patch of workstation turf using desktop CPUs.) Similarly, 64-core, double-the-RAM Threadripper has charged upscale into Epyc turf.
 
That is just about what Intel was doing the last decade with their CPUs as a whole, and I don't remember you saying that was just fine.

Because they are not within the same context, as I explained subsequently. APUs need to remain cheap and not overlap with dedicated offerings (obviously), so there has to be a ceiling on what you can expect, and it is much, much lower than for CPU performance at large, where people are expected to pay even thousands of dollars. You are not going to see a $1,000 APU from AMD or Intel (not that sure about this one :rolleyes:), so you shouldn't expect the same push for advancement; it's all perfectly logical. Like that guy, you don't, or don't want to, understand that the segment these things exist in has many constraints that prohibit performance leaps of the same order as in other segments.

why is there no movement towards a Threadripper-sized socket with ditto chip that has lots of space for IGP? That would enable dGPU perf from the CPU socket right away, with lower clock and a much higher EU count, if you can keep it from burning up

Because, as I explained, no one has a need for it. You can just buy a dedicated GPU that isn't thermally and power constrained the way an iGPU would be. Who buys into such an expensive platform and has a need for non-trivial GPU power, but for some inexplicable reason doesn't want a dedicated card? We are talking about desktop PCs for Christ's sake, not fully integrated systems like a console. There is a really big disconnect between what these products are for and what you guys understand them to be for; nothing I can do about that.
 