
AMD Claims Radeon RX 6500M is Faster Than Intel Arc A370M Graphics

Joined
Dec 14, 2011
Messages
276 (0.06/day)
Processor 12900K @5.1all Pcore only, 1.23v
Motherboard MSI Edge
Cooling D15 Chromax Black
Memory 32GB 4000 C15
Video Card(s) 4090 Suprim X
Storage Various Samsung M.2s, 860 evo other
Display(s) Predator X27 / Deck (Nreal air) / LG C3 83
Case FD Torrent
Audio Device(s) Hifiman Ananda / AudioEngine A5+
Power Supply Seasonic Prime TX 1000W
Mouse Amazon finest (no brand)
Keyboard Amazon finest (no brand)
VR HMD Index
Benchmark Scores I got some numbers.
So Intel has its work cut out for it on the software side, is what I'm getting from these charts. At least that's what the sheer gap in the range suggests: going from a 25% difference in one game to a 125% difference in another is a bit wild.
 
Joined
Apr 2, 2022
Messages
10 (0.01/day)
It's not so bad!
In the worst-case scenario, desktop DG2-512 will have a slightly better 4K performance/TFlop ratio than Vega 64, and the same will be true for desktop DG2-128 vs the RX 470 regarding the FHD/QHD performance/TFlop ratio.
Compared to Nvidia Ampere, desktop DG2-512 will have at most -5% 4K performance/TFlop vs the RTX 3070, and DG2-128 will have a better FHD performance/TFlop ratio vs the RTX 3050 (DG2-256 would have been the same vs the RTX 3050).
The above predictions are very specific and safe imo; they will not fail.
The performance issues are easily fixed with the correct pricing. They only have to avoid major (deal-breaking) issues with their software/drivers and everything will be O.K.
EDIT: regarding transistor count / performance ratios, yes, it will be worse than RDNA2, but the design is more advanced, and the ratio is influenced by the immature drivers that Intel will have (and don't forget that nearly the whole market of developers is optimizing their engines for the RDNA2 consoles!).
The 128 EU part has an estimated 3.1 TFLOPS of SP compute power. The RX 470 has 4.9, as per its official specs. The RX 470 hovers anywhere between 2.1x-2.7x faster than the 96 EU unit. Now, using Intel's current 96 EU Xe part, we can safely add on 25% performance for the extra 33% EUs (it's obviously not a 1:1 ratio). Even if you give the 128 EU DG2 the benefit of the doubt and say it boosts to 2.2 GHz (very unlikely), and that the dedicated GDDR6 memory (on a sad 64-bit bus) would even add another 50% performance, the performance per TFLOP of the DG series is already sad. We also have no idea how Intel's dedicated parts scale in terms of TDP. Just because it's on the same 6nm process doesn't mean they will have linear power curves.
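Spelling that stack of generous factors out (my own arithmetic, not the poster's exact math; the ~1.4 GHz baseline iGPU clock is an illustrative guess):

```python
# Back-of-envelope version of the estimate above. Every factor is the
# poster's stated assumption except the 96 EU baseline clock, which is
# a guess (~1.4 GHz) purely for illustration.
eu_scaling    = 1.25        # +25% perf for the extra 33% EUs (not 1:1)
clock_scaling = 2.2 / 1.4   # optimistic 2.2 GHz vs an assumed 1.4 GHz
memory_uplift = 1.50        # generous +50% from dedicated GDDR6

best_case = eu_scaling * clock_scaling * memory_uplift  # vs 96 EU iGPU = 1.0
print(round(best_case, 2))  # -> 2.95
```

Even piling on every optimistic factor only lands the 128 EU part around the RX 470's 2.1x-2.7x lead over the 96 EU iGPU, on ~3.1 TFLOPS vs the 470's 4.9, which is the "sad perf per TFLOP" point being made.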

Having said all this, the TFLOP/performance ratio is a pretty crappy way to evaluate performance, since each company's shader cores are different, making the number almost irrelevant outside of intra-architectural comparisons. TMUs, ROPs, and by proxy texture and pixel fill rates, are much more accurate ways to evaluate performance. Even then, it varies pretty wildly sometimes.

Also, comparing TFLOPS is useless with Nvidia now, because they base their numbers on using FP and INT calculations at the same time -- which doesn't happen in gaming or general-use scenarios. For example, the 2070 Super has about 9-10 TFLOPS of compute vs the RTX 3050's 9 TFLOPS. If you cut the 3050's number in half, however, you get a closer comparison to traditional TFLOP calculations based on standard FP32 throughput. That would peg the 3050 at 4.5 TFLOPS, with performance roughly around a 1660 Ti.
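The halving logic above can be sketched in a few lines. This is just the standard rated-TFLOPS arithmetic (2 FMA ops per clock per shader); the ~1.78 GHz boost clock is the 3050's published spec, and the divide-by-two is the poster's rule of thumb, not an official figure:

```python
# Theoretical FP32 TFLOPS = 2 ops/clock (FMA) * shader count * boost clock.
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000.0

# RTX 3050: 2560 shaders at ~1.777 GHz boost -> the ~9 "rated" TFLOPS.
rated = fp32_tflops(2560, 1.777)

# Halving Ampere's number (half the FP32 units share a datapath with
# INT32) gives the figure comparable to pre-Ampere TFLOPS ratings.
comparable = rated / 2

print(round(rated, 1), round(comparable, 1))  # -> 9.1 4.5
```

That ~4.5 comparable TFLOPS is what puts the 3050 in 1660 Ti territory in this framing.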

To bring this all home, those numbers are not realistically safe, because they are based on numbers that don't matter. The 512 EU GPU, with 4096 SPs, will probably be matched by the 3060 Ti, a card with 4864 shader cores (but cut that roughly in half) and 16 TFLOPS (again, cut that roughly in half). So Intel's highest-end part will most likely match, or barely beat, a 2432-shader-core part with 8 TFLOPS of true FP32 computational power. If it can do it in the same TGP or TDP, then that's really all that matters. But it still shows that Nvidia's and AMD's architectures are so far ahead at this point that Intel won't catch up anytime soon.

This isn't a personal attack or anything. It's just trying to explain that the math doesn't always equal expected results when taken at face value. I want Intel to compete as much as anyone else, but unfortunately I think we are going to see some pretty lackluster parts. Their ace in the hole has to be pricing and feature set. If XeSS is as good as it looks, and their media engine is also as good, it could be a pretty nice card at the right price.

Or it could end up sucking complete ass and having us wonder why we were excited at all. That's the fun.
 
Last edited:
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
Agreed, it's what I meant in my reply to the question above about what the transistors are there for.

To be honest, I initially thought this tweet was an April Fools' joke, but I had missed the frame-rate benchmarks at medium settings that Intel actually released for those games. I guess it's a serious thing, despite the kind of juvenile "FTW" hashtag.
At least they didn't attempt to nearly bankrupt Intel. I wouldn't blame them if they did though.
 
Joined
Sep 8, 2020
Messages
230 (0.14/day)
System Name Home
Processor 5950x
Motherboard Asrock Taichi x370
Cooling Thermalright True Spirit 140
Memory Patriot 32gb DDR4 3200mhz
Video Card(s) Sapphire Radeon RX 6700 10gb
Storage Too many to count
Display(s) U2518D+u2417h
Case Chieftec
Audio Device(s) onboard
Power Supply seasonic prime 1000W
Mouse Razer Viper
Keyboard Logitech
Software Windows 10
Gaming isn't everything. I bet they dedicated some serious space on that die for the content creation crowd.
 
Joined
Apr 21, 2010
Messages
578 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100
So AMD Radeon does have Intel's sample? They did test it and have numbers?
 
Joined
Oct 27, 2020
Messages
818 (0.53/day)
The 128 EU part has an estimated 3.1 TFLOPS of SP compute power. The RX 470 has 4.9, as per its official specs. The RX 470 hovers anywhere between 2.1x-2.7x faster than the 96 EU unit. Now, using Intel's current 96 EU Xe part, we can safely add on 25% performance for the extra 33% EUs (it's obviously not a 1:1 ratio). Even if you give the 128 EU DG2 the benefit of the doubt and say it boosts to 2.2 GHz (very unlikely), and that the dedicated GDDR6 memory (on a sad 64-bit bus) would even add another 50% performance, the performance per TFLOP of the DG series is already sad. We also have no idea how Intel's dedicated parts scale in terms of TDP. Just because it's on the same 6nm process doesn't mean they will have linear power curves.

Having said all this, the TFLOP/performance ratio is a pretty crappy way to evaluate performance, since each company's shader cores are different, making the number almost irrelevant outside of intra-architectural comparisons. TMUs, ROPs, and by proxy texture and pixel fill rates, are much more accurate ways to evaluate performance. Even then, it varies pretty wildly sometimes.

Also, comparing TFLOPS is useless with Nvidia now, because they base their numbers on using FP and INT calculations at the same time -- which doesn't happen in gaming or general-use scenarios. For example, the 2070 Super has about 9-10 TFLOPS of compute vs the RTX 3050's 9 TFLOPS. If you cut the 3050's number in half, however, you get a closer comparison to traditional TFLOP calculations based on standard FP32 throughput. That would peg the 3050 at 4.5 TFLOPS, with performance roughly around a 1660 Ti.

To bring this all home, those numbers are not realistically safe, because they are based on numbers that don't matter. The 512 EU GPU, with 4096 SPs, will probably be matched by the 3060 Ti, a card with 4864 shader cores (but cut that roughly in half) and 16 TFLOPS (again, cut that roughly in half). So Intel's highest-end part will most likely match, or barely beat, a 2432-shader-core part with 8 TFLOPS of true FP32 computational power. If it can do it in the same TGP or TDP, then that's really all that matters. But it still shows that Nvidia's and AMD's architectures are so far ahead at this point that Intel won't catch up anytime soon.

This isn't a personal attack or anything. It's just trying to explain that the math doesn't always equal expected results when taken at face value. I want Intel to compete as much as anyone else, but unfortunately I think we are going to see some pretty lackluster parts. Their ace in the hole has to be pricing and feature set. If XeSS is as good as it looks, and their media engine is also as good, it could be a pretty nice card at the right price.

Or it could end up sucking complete ass and having us wonder why we were excited at all. That's the fun.
Sure, I may be wrong, but I have confidence in my analysis because I have a good track record. Of course you may disagree; you don't know me, plus you have your own projections, I guess.
With a lot of what you mentioned I agree, and it's not in contrast to what, or why, I wrote in my post. Let me explain:
1. I don't compare performance levels; I compare performance-at-a-given-resolution per TFlop (boost) ratios, and only for the desktop parts.
2. This is not meant to demonstrate a method for calculating performance levels for Arc or other architectures. As you correctly say, knowing only the TF rating is meaningless, since there are many more characteristics comprising a GPU (even within the same architecture you can see how different the results are with different pixel/texture/memory configurations; just look at the 6900 XT vs the 6800...). My post is just a statement: a statement that things will not be as bad as some guys think after AMD's performance comparison. If you look at it, desktop DG2-512 having only -5% 4K performance/TF ratio vs Nvidia's 3070 is not exactly bad; it will not be as good as RDNA2, but still a little bit better than Vega 64. This does not mean that the performance per teraflop achieved is great on its own; as you point out, Turing (Nvidia's first design with concurrent INT/FP throughput), for example, had much better performance per teraflop than Ampere, Pascal, etc.
Well, you may have guessed it by now, but first I made my analysis of what the Arc architecture might bring, and then I chose competing models from AMD & Nvidia to paint a positive picture for Arc, because we need a third player. If I wanted, I could have picked other competing models from AMD & Nvidia to paint a bad picture. It's all smoke and mirrors, like what AMD is doing comparing transistor counts for the A370M & 6500M and then suggesting that they achieved much greater performance with a smaller design. It's deceiving, because it creates a negative picture for their competitor with a comparison that isn't fair (Arc's media engine is more advanced than AMD's and takes more space; the bus is 96-bit, and this also takes space; Arc supports matrix math with twice the throughput per SM/Xe core vs even Ampere, devoting a lot of space to neural-network/AI processing that will not bear fruit for classic raster; etc.). And the most important factors, of course, are Intel's driver immaturity and the fact that every developer is optimizing their engines for RDNA2. So, taking all the above into account, it's perfectly natural for Intel to have inferior performance from a bigger design; it doesn't mean much essentially, just like the performance/TFlop comparison...
The numbers I presented do not state a performance level directly; you have to choose a frequency for DG2-512 to find what I'm suggesting regarding performance. For example, at 2.25 GHz it would be 21% slower than the 3070 in this worst-case scenario (lower even than your 3060 Ti performance-level suggestion), and DG2-384 at 3060 level. So it's not exactly as optimistic as the "-5% 3070 perf/TFlop" phrasing suggests (smoke and mirrors).
Again, with the correct pricing (if we don't have major driver/software issues), everything will be fine.
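The perf/TFlop framing in this exchange can be sketched numerically. This is my own illustration, not either poster's exact math: the 2.25 GHz DG2-512 clock and the 3070's ~1.73 GHz boost are assumed inputs, and the 0.95 factor encodes the "-5% perf/TFlop" claim; different clock assumptions shift the result, which is why it need not match the -21% figure quoted above.

```python
# Sketch of the perf/TFLOP comparison method. All inputs below are
# illustrative assumptions, not confirmed specs.
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    # 2 ops/clock (FMA) * shaders * clock
    return 2 * shaders * boost_ghz / 1000.0

def relative_perf(tflops_a: float, tflops_b: float,
                  perf_per_tflop_ratio: float) -> float:
    """Performance of A relative to B, given A's perf/TFLOP handicap."""
    return (tflops_a / tflops_b) * perf_per_tflop_ratio

# Hypothetical DG2-512 (4096 SPs) at an assumed 2.25 GHz vs an RTX 3070
# (5888 shaders, ~1.73 GHz boost), with the claimed -5% perf/TFLOP:
dg2 = fp32_tflops(4096, 2.25)      # ~18.4 TFLOPS
n3070 = fp32_tflops(5888, 1.73)    # ~20.4 TFLOPS
print(f"{relative_perf(dg2, n3070, 0.95):.2f}x the 3070")  # -> 0.86x the 3070
```

The point of the sketch is the mechanism: a small perf/TFLOP deficit compounds with a raw-TFLOPS deficit, so a "-5% perf/TFlop" headline can still translate into a double-digit performance gap.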
 
Joined
Feb 20, 2019
Messages
8,523 (3.95/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
This is the intended design purpose of the 6500XT's Navi24 die; Efficiency for laptops.

I'm looking forward to seeing a budget thin&light with a Ryzen5 and 6500M - the IGP and dGPU were designed to work together, which is why the 6500XT is short on encoder/decoder/outputs - it'll basically be auxiliary shaders, TMUs, and ROPs for the IGP to draw on, all connected to GDDR6 so that the IGP isn't hampered by shared DDR4.

SLI/Crossfire died a long time ago, but I'm wondering if a Cezanne IGP with Navi24 will actually multi-GPU, or whether the IGP will be solely used for encode/decode/low-power GPU functions and the 6500M will do all the lifting when it's active.
 

aQi

Joined
Jan 23, 2016
Messages
646 (0.20/day)
I was quietly hoping Intel was going to bring something to the market and maybe there would be a third player, but no, hahahaha.
I was also expecting this to happen. Intel is ages behind in the GPU territory which was also apparent in their previous iGPU iterations.
Maybe they'll be able to do something with the compute performance, but game optimization is far, far away from adequate...
Considering where they're coming from, this is pretty neat compared to how much worse it could have been. Plus, old games are not optimised for Arc GPUs; these new GPUs are simply rendering what is actually optimised for AMD and Nvidia GPUs. Perhaps new titles and a new set of drivers will be appreciated. Apparently Intel also introduced the ATX 3.0 standard, which is likely to take the GPU world's potential to the next level.
 
Joined
Feb 18, 2005
Messages
5,848 (0.80/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
Other things that are faster than Intel graphics include, but are not limited to, a literal potato.

So what are all these many more transistors for? To produce more heat?
To justify Raja's paycheck.
 
Joined
Nov 15, 2020
Messages
945 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
Very interesting discussion with Robert Hallock from AMD (Technical Marketing) on KitGuru this morning. He compares the chip designs of Intel and AMD and where they are in their respective evolution paths.
 
Joined
Sep 17, 2014
Messages
22,914 (6.07/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Gaming isn't everything. I bet they dedicated some serious space on that die for the content creation crowd.

Mhm, it's Vega all over again.

'The next frontier', Raja must have thought at some point. I think he's a Trekkie at heart. It also explains the mental die size; after all, the Federation doesn't worry about money anymore.
 
Joined
Jun 19, 2020
Messages
108 (0.06/day)
AMD marketing does not bend numbers. Their benchmarks are never far off from reality. Sadly, I cannot say the same about Intel.
 

ixi

Joined
Aug 19, 2014
Messages
1,451 (0.38/day)
Personally, I don't believe Raja. There were a few attempts from him on the red side, and each time it ended up being kind of a letdown. At least they priced the products cheaply.
 
Joined
Mar 28, 2020
Messages
1,769 (1.01/day)
AMD marketing does not bend numbers. Their benchmarks are never far off from reality. Sadly, I cannot say the same about Intel.
You know that marketing's job is to make the product look good, right? My point is that while the numbers may be accurate, AMD may have cherry-picked some of the tests so that they look better than the competition.
 
Joined
May 20, 2020
Messages
1,393 (0.82/day)
All the Intel Arc fps results are quite close together, practically of the same value, which might indicate the roughness of Intel's graphics driver; performance increases may yet be gained with future drivers.
 
Joined
Sep 17, 2014
Messages
22,914 (6.07/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
All the Intel Arc fps results are quite close together, practically of the same value, which might indicate the roughness of Intel's graphics driver; performance increases may yet be gained with future drivers.
Too bad there are about ten times more games coming out than Intel has historically covered properly in its drivers.

And that's the baseline... they really need per title optimization to overtake Nvidia / AMD drivers that have a massive amount of customization in them.
2030, maybe... I reckon AMD fine wine is nothing compared to what Intel is gonna have to do.

If Intel is smart, they put full GPU control in the hands of the crowd and let them fix it for them. Bethesda style. Any other path is disaster, they will always be behind. Look at how long AMD needed to catch up from their GCN/ Fury > Vega 'dip'... over five years.

It echoes in the time-to-market they have now for Arc: this train won't stop, even if it seems to have slowed down with Pascal/Turing/Ampere being spaced further apart. Arc is starting the race already barely hanging on.
 
Joined
May 2, 2017
Messages
7,762 (2.75/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
This is the intended design purpose of the 6500XT's Navi24 die; Efficiency for laptops.

I'm looking forward to seeing a budget thin&light with a Ryzen5 and 6500M - the IGP and dGPU were designed to work together, which is why the 6500XT is short on encoder/decoder/outputs - it'll basically be auxiliary shaders, TMUs, and ROPs for the IGP to draw on, all connected to GDDR6 so that the IGP isn't hampered by shared DDR4.

SLI/Crossfire died a long time ago, but I'm wondering if a Cezanne IGP with Navi24 will actually multi-GPU, or whether the IGP will be solely used for encode/decode/low-power GPU functions and the 6500M will do all the lifting when it's active.
This is what I'm looking forward to seeing as well. It's abundantly clear that the desktop 6500XT is a thoughtlessly thrown-together product based on a highly specific mobile-first design that is laser-focused on design efficiency - hence the narrow memory bus and PCIe bus, and relying on the 6000-series APU's encode/decode. That GPU die is designed only to pair with those APUs - and it should do really well for its price and power budget when paired like that. The desktop GPU is pushed ridiculously high to deliver some sort of "value" (don't get me started on its pricing...), when it should really have been called the 6400, clocked so that it ran off PCIe slot power only, and cost ... $150 at most? Then they could have made a cut-down 6600 for the 6500 tier and had a much more compelling product in that tier as well. But this just demonstrates that in a supply-constrained seller's market, you end up getting crap products unless you're in the most profitable/popular groups - and low-end desktop GPUs ain't that.

I doubt they'll be trying any kind of mGPU though - it's just too flaky, too difficult to implement, and latency over PCIe negates the possibility of any type of transparent solution. I'm hoping/expecting this to change when we get MCM APUs though - hopefully with some sort of package-integrated bridge IF solution to cut its power draw.
 
Joined
Feb 20, 2019
Messages
8,523 (3.95/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
This is what I'm looking forward to seeing as well. It's abundantly clear that the desktop 6500XT is a thoughtlessly thrown-together product based on a highly specific mobile-first design that is laser-focused on design efficiency - hence the narrow memory bus and PCIe bus, and relying on the 6000-series APU's encode/decode. That GPU die is designed only to pair with those APUs - and it should do really well for its price and power budget when paired like that. The desktop GPU is pushed ridiculously high to deliver some sort of "value" (don't get me started on its pricing...), when it should really have been called the 6400, clocked so that it ran off PCIe slot power only, and cost ... $150 at most? Then they could have made a cut-down 6600 for the 6500 tier and had a much more compelling product in that tier as well. But this just demonstrates that in a supply-constrained seller's market, you end up getting crap products unless you're in the most profitable/popular groups - and low-end desktop GPUs ain't that.

I doubt they'll be trying any kind of mGPU though - it's just too flaky, too difficult to implement, and latency over PCIe negates the possibility of any type of transparent solution. I'm hoping/expecting this to change when we get MCM APUs though - hopefully with some sort of package-integrated bridge IF solution to cut its power draw.
Makes more sense I guess.

So the 6500M will likely just be the core GPU functionality of 16CU and a GDDR6 controller and any other functions will be handled by the fully-featured IGP.

It's not going to be fast, but in terms of FPS/Watt I expect it to be at or near the top of the charts, and for a thin & light laptop, that's potentially the most important chart to win.
 
Joined
May 2, 2017
Messages
7,762 (2.75/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Makes more sense I guess.

So the 6500M will likely just be the core GPU functionality of 16CU and a GDDR6 controller and any other functions will be handled by the fully-featured IGP.

It's not going to be fast, but in terms of FPS/Watt I expect it to be at or near the top of the charts, and for a thin & light laptop, that's potentially the most important chart to win.
Yeah, and offloading as much as possible to the iGPU also allows keeping the dGPU power-gated as much of the time as possible, helping battery life. I expect both the 6500M and 6300M to deliver pretty great performance/volume/price in thin-and-light laptops, and a good user experience when not gaming. I haven't seen even a single design with those chips show up yet, though.
 
Joined
Sep 18, 2017
Messages
199 (0.07/day)
Getting into the GPU market was never going to be a quick process for Intel and to think they would be industry leaders with their first iteration is naïve.

Hopefully enough people will buy their GPUs (probably wont be me) to create more competition.
 
Joined
Jul 9, 2015
Messages
3,413 (0.98/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.