
AMD "Greenland" Vega10 Silicon Features 4096 Stream Processors?

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,252 (7.54/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
The LinkedIn profile of an R&D manager at AMD discloses key details of the company's upcoming "Greenland" graphics processor, also codenamed Vega10. Slated for an early-2017 launch according to AMD's GPU architecture roadmap, "Greenland" will be built on AMD's "Vega" GPU architecture, which succeeds the "Polaris" architecture due later this year.

The LinkedIn profile of Yu Zheng, an R&D manager at AMD (since redacted), screen-captured by 3DCenter.org, reveals the "shader processor" (stream processor) count of Vega10 to be 4,096. That may look identical to the SP count of "Fiji," but "Greenland" is two generations of Graphics CoreNext ahead of "Fiji," and the roadmap slide hints at HBM2 memory, which could be faster. Factor in AMD's claim of a 2.5X leap in performance-per-Watt for Polaris over the current architecture, and Vega should be faster still.



In related news, AMD could be putting the final touches on its first chips based on the "Polaris" architecture: a performance-segment chip codenamed "Ellesmere" or Polaris10, and a mid-range chip codenamed "Baffin" or Polaris11. "Ellesmere" is rumored to feature 36 GCN 4.0 compute units, which works out to 2,304 stream processors, and a 256-bit wide GDDR5 (or GDDR5X?) memory interface with 8 GB as the standard memory amount. The specs of "Baffin" aren't as clear; the only specification doing the rounds is its 128-bit wide GDDR5 memory bus. Products based on both chips could launch in Q3 2016.

 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,066 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850w Gold (ATX3.0)
Software W10
Anyone not seeing the elephant in the room? Polaris releases late 2016, generously maybe by autumn/fall. A few months later, new architecture is released again. Makes zero sense unless they're doing a reverse Nvidia. Vega becomes the Titan style part, with Polaris 10 & 11 being the x80 and x70 parts.

FUD I say.
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
System Name No name / Purple Haze
Processor Phenom II 1100T @ 3.8Ghz / Pentium 4 3.4 EE Gallatin @ 3.825Ghz
Motherboard MSI 970 Gaming/ Abit IC7-MAX3
Cooling CM Hyper 212X / Scythe Andy Samurai Master (CPU) - Modded Ati Silencer 5 rev. 2 (GPU)
Memory 8GB GEIL GB38GB2133C10ADC + 8GB G.Skill F3-14900CL9-4GBXL / 2x1GB Crucial Ballistix Tracer PC4000
Video Card(s) Asus R9 Fury X Strix (4096 SP's/1050 Mhz)/ PowerColor X850XT PE @ (600/1230) AGP + (HD3850 AGP)
Storage Samsung 250 GB / WD Caviar 160GB
Display(s) Benq XL2411T
Audio Device(s) motherboard / Creative Sound Blaster X-Fi XtremeGamer Fatal1ty Pro + Front panel
Power Supply Tagan BZ 900W / Corsair HX620w
Mouse Zowie AM
Keyboard Qpad MK-50
Software Windows 7 Pro 64Bit / Windows XP
Benchmark Scores 64CU Fury: http://www.3dmark.com/fs/11269229 / X850XT PE http://www.3dmark.com/3dm05/5532432
Anyone not seeing the elephant in the room? Polaris releases late 2016, generously maybe by autumn/fall. A few months later, new architecture is released again. Makes zero sense unless they're doing a reverse Nvidia. Vega becomes the Titan style part, with Polaris 10 & 11 being the x80 and x70 parts.

FUD I say.

These are my predictions:

Polaris (10 and 11) - May / July - Expect at least R9 390X-class performance cards for cheap (much like the HD 2900 XT to HD 3870 transition): R9 470 and R9 480 parts.
Vega - (September 2016 to January 2017) - I expect these to be the R9 490X cards.

Or you could shift these one tier up, with Polaris 11/10 being the 480 and 490 class of cards and Vega the Fiji successor.
 
Joined
Dec 3, 2009
Messages
1,301 (0.24/day)
Location
The Netherlands
System Name PC ||Zephyrus G14 2023
Processor Ryzen 9 5900x || R9 7940HS @ 55W
Motherboard MAG B550M MORTAR WIFI || default
Cooling 1x Corsair XR5 360mm Rad||
Memory 2x16GB HyperX 3600 @ 3800 || 32GB DDR5 @ 4800MTs
Video Card(s) MSI RTX 2080Ti Sea Hawk EK X || RTX 4060 OC
Storage Samsung 980 1TB x2 + Striped Tiered Storage Space (2x 128Gb SSD + 2x 1TB HDD) || 1TB NVME
Display(s) Iiyama PL2770QS + Samsung U28E590, || 14' 2560x1600 165Hz IPS
Case SilverStone Alta G1M ||
Audio Device(s) Asus Xonar DX
Power Supply Cooler Master V850 SFX || 240W
Mouse ROG Pugio II
Software Win 11 64bit || Win 11 64bit
Going by what Raja told Ryan from PCPer, I expect RTG to use two chips (on an interposer maybe?) on their top Vega part.
 
Joined
Sep 6, 2013
Messages
3,357 (0.82/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years later I got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Noctua U12S / Segotep T4 / Snowman M-T6
Memory 32GB - 16GB G.Skill RIPJAWS 3600+16GB G.Skill Aegis 3200 / 16GB JUHOR / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes/ NVMes, SATA Storage / NVMe boot(Clover), SATA storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
I think I was expecting this. Who knows how many years companies will have to stay at 14nm, or how much more expensive the transition to 10nm will be.

So, compared to today's models, they will give a 15-25% performance improvement, mostly from higher frequencies and architectural changes, at 3/4 of the power consumption thanks to the 14nm/16nm process, and that's it for this summer.

Add to that GDDR5/X rather than HBM, the rumors that Polaris remains a feature level 12_0 card, and Pascal still not knowing what Async Compute is, and we already start to question whether this summer's models are the cards we are waiting for, or those that will come later.
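As a back-of-the-envelope check, the poster's own estimates (15-25% more performance at 3/4 the power) imply a perf-per-Watt multiplier well below AMD's claimed 2.5x. A minimal Python sketch of that arithmetic, using the post's figures as assumptions:

```python
# Perf/W gain implied by a given performance gain at a given power fraction.
# The 15-25% and 3/4 figures are the poster's estimates, not AMD's numbers.
def perf_per_watt_gain(perf_gain, power_fraction):
    """Relative perf/W versus the old part."""
    return (1.0 + perf_gain) / power_fraction

low = perf_per_watt_gain(0.15, 0.75)   # ~1.53x
high = perf_per_watt_gain(0.25, 0.75)  # ~1.67x
print(f"{low:.2f}x - {high:.2f}x perf/W, vs. AMD's claimed 2.5x")
```

Even the optimistic end of that range falls far short of 2.5x, which is the gap the post is questioning.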
 
Joined
Oct 30, 2008
Messages
1,768 (0.30/day)
System Name Lailalo
Processor Ryzen 9 5900X Boosts to 4.95Ghz
Motherboard Asus TUF Gaming X570-Plus (WIFI)
Cooling Noctua
Memory 32GB DDR4 3200 Corsair Vengeance
Video Card(s) XFX 7900XT 20GB
Storage Samsung 970 Pro Plus 1TB, Crucial 1TB MX500 SSD, Seagate 3TB
Display(s) LG Ultrawide 29in @ 2560x1080
Case Coolermaster Storm Sniper
Power Supply XPG 1000W
Mouse G602
Keyboard G510s
Software Windows 10 Pro / Windows 10 Home
Anyone not seeing the elephant in the room? Polaris releases late 2016, generously maybe by autumn/fall. A few months later, new architecture is released again. Makes zero sense unless they're doing a reverse Nvidia. Vega becomes the Titan style part, with Polaris 10 & 11 being the x80 and x70 parts.

FUD I say.

Probably because Polaris is most likely just Fiji 2.0: Fury with a die shrink. The entire Polaris line could just be variations of Fury with different stream processor or memory counts.

Things really don't seem to get interesting till Vega. The last time AMD hyped up "performance per watt" we got Fury and were underwhelmed. 2.5x means squat. It's just more of that hype train trying to make Polaris look great, when it's Vega that folks should be more keen on.

AMD needs something to get them through till next year. Polaris will likely do. I just don't feel so bad about having to jump on a 390 before I planned to spend.
 
Joined
Apr 8, 2008
Messages
341 (0.06/day)
System Name Xajel Main
Processor AMD Ryzen 7 5800X
Motherboard ASRock X570M Steel Legened
Cooling Corsair H100i PRO
Memory G.Skill DDR4 3600 32GB (2x16GB)
Video Card(s) ZOTAC GAMING GeForce RTX 3080 Ti AMP Holo
Storage (OS) Gigabyte AORUS NVMe Gen4 1TB + (Personal) WD Black SN850X 2TB + (Store) WD 8TB HDD
Display(s) LG 38WN95C Ultrawide 3840x1600 144Hz
Case Cooler Master CM690 III
Audio Device(s) Built-in Audio + Yamaha SR-C20 Soundbar
Power Supply Thermaltake 750W
Mouse Logitech MK710 Combo
Keyboard Logitech MK710 Combo (M705)
Software Windows 11 Pro
I think this also relates to the delay of HBM2, as both AMD and NV are choosing GDDR5X for their next-gen high-end... while at the same time, AMD thinks HBM2 holds such great potential that they're eager to release a product with it as soon as possible...
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
Ah, an R&D manager spilling company secrets on his LinkedIn profile ... I wish him good luck on the job market
 
Joined
Jul 18, 2007
Messages
2,693 (0.42/day)
System Name panda
Processor 6700k
Motherboard sabertooth s
Cooling raystorm block<black ice stealth 240 rad<ek dcc 18w 140 xres
Memory 32gb ripjaw v
Video Card(s) 290x gamer<ntzx g10<antec 920
Storage 950 pro 250gb boot 850 evo pr0n
Display(s) QX2710LED@110hz lg 27ud68p
Case 540 Air
Audio Device(s) nope
Power Supply 750w superflower
Mouse g502
Keyboard shine 3 with grey, black and red caps
Software win 10
Benchmark Scores http://hwbot.org/user/marsey99/
Vega becomes the Titan style part, with Polaris 10 & 11 being the x80 and x70 parts.


seems vega_num is their new internal code naming scheme.

but i think they want to switch to a more nvidia-like lineup too, with a tier of card between the gamer cards and FirePro.

as for greenland next year, i doubt it unless polaris fails.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.45/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
4096 is pretty disappointing for 14nm. The only incentive to buy Vega10 would be power consumption compared to Fiji (and minor stuff like DP 1.3, HDMI 2.0a, D3D 12_1 feature level, etc.). They better have something more potent in the works (and not dual GPU) if they want to compete with NVIDIA's top cards. Vega10 should be an upper mid-range card.
 
Joined
Apr 30, 2011
Messages
2,707 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
4096 is pretty disappointing for 14nm. The only incentive to buy Vega10 would be power consumption compared to Fiji (and minor stuff like DP 1.3, HDMI 2.0a, D3D 12_1 feature level, etc.). They better have something more potent in the works (and not dual GPU) if they want to compete with NVIDIA's top cards. Vega10 should be an upper mid-range card.
Vega 11 (or 9 if reversed naming) anyone? ;)
 
Joined
Jan 23, 2011
Messages
588 (0.12/day)
Location
St. Louis, MO
System Name Desktop
Processor AMD Ryzen 7800X3D
Motherboard MSI X670E MEG ACE
Cooling Corsair XC7 Block / Corsair XG7 Block EK 360PE Radiator EK 120XE Radiator 8x EK Vardar Furious Fans
Memory 64GB TeamGroup T-Create Expert DDR5-6000
Video Card(s) MSI RTX 4090 Gaming X Trio
Storage 1TB WD Black SN850 / 4TB Inland Premium / 8TB WD Black HDD
Display(s) Alienware AW3821DW / ASUS TUF VG279QM
Case Lian-Li Dynamic 011 XL ROG
Audio Device(s) Razer Nommo Pro Speakers / Creative AE-9 w/ Audio-Technica ATH-R70X
Power Supply EVGA P2 1200W Platinum
Mouse Razer Viper, Logitech G600
Keyboard Razer Huntsman Elite
Considering the games coming down the pipe and how porting to PC has become the norm in some cases... if you haven't bought a new GPU in the last 3 years, anything from the Fury line or the next two AMD lines should be more than enough to hold you over for a while.

Same on the NV side: if you have a Maxwell or a Pascal and play only at 1080p, possibly 1440p, you should be good for a while. I think GPUs are finally falling in line with CPUs; they're hitting that size wall and are more about cutting power and improving energy efficiency. Sure, a die shrink will help performance, but they are only going to get so fast.

Developers are lazy and don't really optimize their code, hence the ports we have been getting on PC lately. They optimize for PS4 and XBone because the hardware is static. With PC they have all the different configurations, and unless either side does something to get developers off their ass and actually code for the PC hardware, it is going to be the same dog and pony show it has been for the past 6-8 years.

We've had pretty awesome hardware for awhile now and it just isn't utilized properly.

Some of us are still on first-gen i7s and FX-8350s. Unless you run synthetic benchmarks all day, you'd be hard pressed to notice a difference. Hardware is starting to hit a limit, and finally all this bloat and laziness from developers is going to bite them in the ass.

Now I know some studios do much better than others with their optimization and getting their products to run on various platforms, but a fair majority don't.


I am not too worried though, my 2x Nanos should be more than plenty for awhile.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
With the top chip in Vega/"5th generation GCN" holding 4096 shader processors (the same as Fiji), let's at least hope that it brings greater architectural changes than Polaris, which AMD themselves considers a minor change (except for the shrink of course).

It may be a smart move for AMD to target the upper mid-range rather than the high-end, after the Fiji blunder where they spent all their resources on a high-end product that didn't sell well. The $300-550 market is, after all, the most profitable, and should let AMD cover as much market share as their limited resources allow.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Considering the games coming down the pipe and how porting to PC has become the norm in some cases... if you haven't bought a new GPU in the last 3 years, anything from the Fury line or the next two AMD lines should be more than enough to hold you over for a while.

Same on the NV side: if you have a Maxwell or a Pascal and play only at 1080p, possibly 1440p, you should be good for a while. I think GPUs are finally falling in line with CPUs; they're hitting that size wall and are more about cutting power and improving energy efficiency. Sure, a die shrink will help performance, but they are only going to get so fast.
Unlike CPUs, GPUs can continue to scale efficiently provided we can put more shader cores on the die, so for a while GPUs have benefited a lot from shrinks while CPUs have not really improved much since Sandy Bridge. But as we all know, shrinks are fewer and farther between; we might be looking at two shrinks in the next decade or so, and the benefits from each will also decline.

And demand is actually increasing at a higher rate than in the last ten years, since gamers now want higher resolutions and higher frame rates at the same time. We are still not at the point where GPUs are "powerful enough" for game developers to achieve everything they want, and we can expect performance requirements to keep increasing for new games.

The jump from 28nm to 14/16nm is actually "two steps", except for the interconnect, which is still at 20nm. So Pascal is probably going to be the largest performance gain we have seen in a long time, and we are probably not going to see a similar increase for Volta or post-Volta. Currently a single GTX 980 Ti is still not "powerful enough" for 4K gaming, and does not reach 60 FPS in all games at stock speed. And for those who want higher frame rates: even though the GTX 980 Ti is OK for 1440p, it's not enough to push 120-144 FPS in all games at stock speed. With Pascal probably increasing gaming performance by >60%, we will still not be at 4K 120 FPS. If Volta (2018) is not going to be another shrink, then we might get as little as 20% more performance, which is not going to keep up with demand.
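A quick way to see the point above is to compound the post's guessed generational gains from an assumed 980 Ti baseline. A minimal sketch in Python (the starting frame rate and per-generation percentages are the post's rough estimates, not measurements):

```python
# Hypothetical projection: start from an assumed GTX 980 Ti 4K average and
# compound the post's guessed per-generation gains.
fps_4k = 45.0                             # illustrative 980 Ti 4K baseline
gains = {"Pascal": 0.60, "Volta": 0.20}   # the post's rough estimates

for gen, gain in gains.items():
    fps_4k *= 1.0 + gain
    print(f"{gen}: ~{fps_4k:.0f} FPS at 4K")
# Even two generations out, this stays well short of a 120 FPS target.
```

Under these assumptions the projection lands in the mid-80s FPS, which is the "not going to keep up with demand" argument in numbers.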

Developers are lazy and don't really optimize their code. Hence the ports we have been getting on PC lately. They optimize for PS4 and XBone because the hardware is static. With PC they have all the different configurations and unless either side does something to get developers off their ass and actually code for the PC hardware, it is going to be the same dog and pony show it has been for the past 6-8 years.

We've had pretty awesome hardware for awhile now and it just isn't utilized properly.
You are touching on a very important subject. Game developers have gotten used to performance leaps every two years or so, so by the time a game is released they expect people to buy more powerful hardware than it was developed on. We all know that performance gains in hardware are going to decrease over time, so writing good code is going to become increasingly important.

The gaming consoles are a big problem, as they use outdated low-end hardware. And as long as developers keep making games for these machines and porting them to PCs by cranking up the model details, the ports are going to continue to suck. The current API-call mania (Direct3D 12/Vulkan/etc.) is not going to help the situation: every developer knows that batching draws is the only efficient way to render, and with efficient batching the API overhead is low anyway.

Game engines use way too much abstraction to drive GPUs efficiently. If API overhead is a problem for a game, then the engine's CPU overhead is going to be even larger. Doing all kinds of GPU manipulation through thousands of API calls is a step backwards, and scaling via API calls is not going to keep up with the raw performance of Pascal, Volta, post-Volta and so on.
 
Joined
Nov 4, 2005
Messages
11,988 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
More interested in the SOC/IP side of the comment. Perhaps we will see the first hardware driver acceleration with compute and an x86-64 core on a GPU? The timeline fits: one or two HMA Zen cores coupled to a GPU die sharing resources? All the performance with no driver issues from different hardware configs, a simple scalable architecture.

Perhaps I'm all wrong here, but what else does SOC mean?
 

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
19,091 (3.00/day)
Location
UK\USA
System on chip? It could mean a few things, but that's what I first think of when the term is used.
 
Joined
Oct 22, 2014
Messages
14,117 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
Silicon On Chip?
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
Game engines are using way too much abstraction to use GPUs efficiently.
I'd say it's rather that game engines use abstractions that allow very efficient use of GPUs, if you know how to prepare content for them.
Modern engines batch draw calls automatically if different surfaces share textures, materials or shaders. When designing optimal art for 3D games, it's all about reusing stuff while making it look like you are not reusing stuff.
Optimizing at the shader level is done once for all eternity ... essentially there is one optimal physically based lighting shader all games use these days, with diffuse, gloss/specular, emission, occlusion, normal and displacement textures (plus additional detail diffuse+normal textures visible close up), that allows sky-based global illumination. Very little optimization room left there.
All optimization on the CPU side is basically about how to feed GPU command queues while minimizing the number of context switches on the GPU.
The hidden part that can make any game look unoptimized is the occlusion culling algorithm, whose importance is often underestimated. Too many engines are used in a way that unnecessarily draws occluded objects.
The real problem starts when devs push consoles to their limits, relying heavily on the low API overhead and low latencies the heterogeneous memory design allows, only to reach a locked 30 fps ... and then do a straight port onto PCI-E-induced latencies and a higher-overhead API. It may be feasible for the Jaguar cores in the PS4 to write something directly into video RAM every frame; doing that over PCI-E on a PC would introduce extra latency.
Once you start using the benefits of HSA on consoles, you get a less scalable port to PC, simply because of the modular nature of the PC.
The opposite way would be: develop optimally for PC, then leverage HSA on consoles to get acceptable performance... but I'm digressing and borderline rambling
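The automatic batching described above can be sketched in a few lines: draws that share a material/shader key are grouped so the GPU changes state once per group instead of once per object. A minimal Python sketch with hypothetical engine objects (the material and mesh names are illustrative, not a real engine API):

```python
from collections import defaultdict

def batch_draws(draw_calls):
    """Group (material_key, mesh) pairs by material to minimize state changes."""
    batches = defaultdict(list)
    for material_key, mesh in draw_calls:
        batches[material_key].append(mesh)
    return batches

# Four per-object draws collapse into two material batches.
scene = [("rock_pbr", "rock_01"), ("tree_pbr", "tree_03"),
         ("rock_pbr", "rock_02"), ("rock_pbr", "rock_07")]
batches = batch_draws(scene)
print(len(scene), "draws ->", len(batches), "state changes")  # 4 draws -> 2
```

This is also why reusing materials across art assets matters: every unique material key is another state change the batcher cannot eliminate.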
Silicon On Chip?
It's system on chip; every SoC is an ASIC, but not every ASIC is an SoC.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I'd say it's rather that game engines are using abstractions that allow very efficient use of GPUs if you know how to prepare content for it.
Modern engines batch draw calls automatically if different surfaces share textures, materials or shaders. When designing optimal art for 3d games it's all about reusing stuff while making it look like you are not reusing stuff.
No, when game engines add an abstraction layer above the actual API and create a structure that calls upon each object to render itself, we end up with the opposite of batching. New games like Ashes of the Singularity keep bragging about the number of API calls they are able to push through, which is evidence of inefficient coding.

If you render a bunch of meshes in a single API call, the GPU works far more efficiently than if you do it through thousands of small API calls. Even an old GTX 680 is able to render millions of polygons on screen at a high frame rate, but no game pushes geometry at that level, due to inefficient usage of the GPU.

All optimization on cpu side is basically how to feed gpu command queues while minimizing number of context switching on gpu.
Well, not quite. The GPU itself is far better at scheduling its own threads/batches and evening out the load, better than even an infinitely powerful CPU could manage.

The hidden part that can make every game look unoptimized is occlusion culling algorithm which importance is often wrongly underestimated. Too many engines are used in a way they unnecessarily draw occluded objects.
The GPUs themselves are to a large extent able to cull a lot automatically; that was actually one of the hardware improvements between GF100 and GF110.

Still, both vertex shaders and compute shaders can be used for efficient culling. It's actually far more efficient to do the fine-grained culling on the GPU in the shader than to calculate it on the CPU and pass each part of a mesh as separate API calls.
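The culling test being discussed is commonly a sphere-vs-frustum check: an object's bounding sphere is culled when it lies entirely behind any frustum plane. A minimal CPU-side reference in Python (in practice this logic would run in a vertex or compute shader; the single-plane "frustum" here is illustrative):

```python
def sphere_visible(center, radius, planes):
    """planes: (nx, ny, nz, d) tuples with inward-facing normals.
    A point p is inside a plane when n . p + d >= 0; the sphere is culled
    when its center is more than `radius` behind any plane."""
    for nx, ny, nz, d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:          # fully behind this plane -> cull
            return False
    return True

# Illustrative one-plane frustum: a "near plane" at z = 0 facing +z.
planes = [(0.0, 0.0, 1.0, 0.0)]
print(sphere_visible((0, 0, 5), 1.0, planes))   # True: in front
print(sphere_visible((0, 0, -5), 1.0, planes))  # False: behind
```

Running this same test per cluster of triangles on the GPU avoids the per-part API calls the post describes.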
 
Joined
Nov 4, 2005
Messages
11,988 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
As you can see with new games like Ashes of the Singularity keep bragging about the amount of API calls they are able to push through, which is evidence of inefficient coding.

If you try to render a bunch of meshes in a single API call the GPU works way more efficient than if you do it by thousands of small API calls. Even an old GTX 680 is able to render millions of polygons at the screen at a high frame rate, but no game is pushing through geometry at that level due to inefficient usage of the GPU.
It's not the number of API calls so much as the reuse of existing objects (instancing) being called on again, without the call needing to ask the CPU for geometry information and cull status (batch processing) to render the object. Prior to DX12/Mantle/Vulkan, basic scene geometry was done on the CPU, and the majority of culls and texture data were handled and shuffled to the GPU along with the scene render instructions from the CPU. That is exactly why a faster processor could render more FPS, and why benchmarking a CPU at smaller pixel counts (lower resolutions) was the normally accepted way to do things. Modern GPUs, however, are aware of texture location through drivers and hardware, and can hold more textures in memory (in DX12 the onus is on the developer of the 3D application to have the correct textures resident), instead of relying on drivers that had to be optimized for every game, which made drivers 10% actual software and 90% game optimization data in all versions prior.

You are mistaking hardware efficiency and muscle for poor coding, when it's actually showing off the hardware and software improvements.
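The instancing point above can be shown by counting submissions: naively, every object costs one draw call, while instancing costs one call per unique mesh, with the per-copy transforms uploaded in a buffer. A toy Python sketch (the mesh names and counts are made up for illustration, not a real graphics API):

```python
def api_calls_naive(objects):
    """One draw call per object, as in a per-object abstraction layer."""
    return len(objects)

def api_calls_instanced(objects):
    """One instanced draw call per unique mesh type."""
    return len(set(mesh for mesh, _ in objects))

# A scene of 10,000 trees and 5,000 rocks, each (mesh, position).
trees = [("tree_mesh", (x, 0, 0)) for x in range(10000)]
rocks = [("rock_mesh", (x, 0, 5)) for x in range(5000)]
scene = trees + rocks
print(api_calls_naive(scene), "vs", api_calls_instanced(scene))  # 15000 vs 2
```

Which is the post's point: the low-overhead APIs make the naive path cheaper, but reusing objects collapses the call count by orders of magnitude either way.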
 
Joined
Oct 9, 2009
Messages
716 (0.13/day)
Location
Finland
System Name RGB-PC v2.0
Processor AMD Ryzen 7950X
Motherboard Asus Crosshair X670E Extreme
Cooling Corsair iCUE H150i RGB PRO XT
Memory 4x16GB DDR5-5200 CL36 G.SKILL Trident Z5 NEO RGB
Video Card(s) Asus Strix RTX 2080 Ti
Storage 2x2TB Samsung 980 PRO
Display(s) Acer Nitro XV273K 27" 4K 120Hz (G-SYNC compatible)
Case Lian Li O11 Dynamic EVO
Audio Device(s) Audioquest Dragon Red + Sennheiser HD 650
Power Supply Asus Thor II 1000W + Cablemod ModMesh Pro sleeved cables
Mouse Logitech G500s
Keyboard Corsair K70 RGB with low profile red cherrys
Software Windows 11 Pro 64-bit
Ah, R&D Manager spilling out company secrets on his LinkedIn profile ... I wish him good luck on the job market

There have been so many of these lately that I call the bluff and say some of it might be planted on purpose to create hype. By this P&R team manager. :p
 
Joined
Oct 15, 2010
Messages
951 (0.18/day)
System Name Little Boy / New Guy
Processor AMD Ryzen 9 5900X / Intel Core I5 10400F
Motherboard Asrock X470 Taichi Ultimate / Asus H410M Prime
Cooling ARCTIC Liquid Freezer II 280 A-RGB / ARCTIC Freezer 34 eSports DUO
Memory TeamGroup Zeus 2x16GB 3200Mhz CL16 / Teamgroup 1x16GB 3000Mhz CL18
Video Card(s) Asrock Phantom RX 6800 XT 16GB / Asus RTX 3060 Ti 8GB DUAL Mini V2
Storage Patriot Viper VPN100 Nvme 1TB / OCZ Vertex 4 256GB Sata / Ultrastar 2TB / IronWolf 4TB / WD Red 8TB
Display(s) Compumax MF32C 144Hz QHD / ViewSonic OMNI 27 144Hz QHD
Case Phanteks Eclipse P400A / Montech X3 Mesh
Power Supply Aresgame 850W 80+ Gold / Aerocool 850W Plus bronze
Mouse Gigabyte Force M7 Thor
Keyboard Gigabyte Aivia K8100
Software Windows 10 Pro 64 Bits
So they will give a 15-25% performance improvement, mostly from higher frequencies and architectural changes, for 3/4 of the power consumption thanks to the 14nm/16nm process, compared to today's models and that's it for this summer.

I don't think anything below a 50% perf increase is worthwhile.
 