Looking forward to seeing what Vega has. Always like AMD's top GPUs.
System Name | Miami |
---|---|
Processor | Ryzen 3800X |
Motherboard | Asus Crosshair VII Formula |
Cooling | Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover |
Memory | F4-3600C16Q-32GTZNC |
Video Card(s) | XFX 6900 XT Speedster 0 |
Storage | 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD |
Display(s) | DELL AW3420DW / HP ZR24w |
Case | Lian Li O11 Dynamic XL |
Audio Device(s) | EVGA Nu Audio |
Power Supply | Seasonic Prime Gold 1000W+750W |
Mouse | Corsair Scimitar/Glorious Model O- |
Keyboard | Corsair K95 Platinum |
Software | Windows 10 Pro |
On the other hand, if you pull all the data from the Titan X review, we see that both manufacturers show decreasing performance per flop as they go up the scale, and it's worse for AMD:
View attachment 82981
So if we plot out all of these cards as Gflops vs Performance and fit a basic trend line we get that big Vega would be around 80% of Titan performance at 12 TFlops. (this puts it even with the 1080).
View attachment 82980
So for AMD to reach 1080 Ti levels, they'll need to have improved their per-flop efficiency by 10-15 percent with this architecture.
Given the number of changes they've talked about for this architecture, I don't think that's infeasible, but it is a hurdle to overcome.
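The trend-line estimate above can be sketched numerically. This is just an illustration of the method (ordinary least squares on TFLOPS vs. relative performance, then read off the value at 12 TFLOPS); the data points below are made-up placeholders, not the actual review numbers.

```python
# Illustrative sketch of the trend-line extrapolation: fit
# perf ~ a * tflops + b by ordinary least squares, then evaluate
# at 12 TFLOPS. The points are hypothetical placeholders, NOT
# the actual Titan X review data.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

tflops = [5.2, 6.5, 8.2, 8.9, 11.0]     # hypothetical cards
perf = [0.48, 0.57, 0.68, 0.72, 0.83]   # performance relative to Titan X = 1.0

a, b = fit_line(tflops, perf)
print(f"estimated relative performance at 12 TFLOPS: {a * 12 + b:.2f}")
```

With a trend like this, a 12 TFLOPS card lands around 0.9x of the flagship, which is the shape of the argument being made, regardless of the exact numbers plugged in.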
Processor | Intel® Core™ i7-13700K |
---|---|
Motherboard | Gigabyte Z790 Aorus Elite AX |
Cooling | Noctua NH-D15 |
Memory | 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5 |
Video Card(s) | ZOTAC GAMING GeForce RTX 3080 AMP Holo |
Storage | 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD |
Display(s) | Acer Predator X34 3440x1440@100Hz G-Sync |
Case | NZXT PHANTOM410-BK |
Audio Device(s) | Creative X-Fi Titanium PCIe |
Power Supply | Corsair 850W |
Mouse | Logitech Hero G502 SE |
Software | Windows 11 Pro - 64bit |
Benchmark Scores | 30FPS in NFS:Rivals |
What has been released vs the 1080 in the last year? Would you call that average?

Not really; these are targeted at the 1080 Ti, not the 1080. This would be about average for an AMD vs Nvidia release schedule. I do, however, find the hype train a bit annoying, as per usual. I was expecting a Jan-Feb release.
System Name | Nero Mini |
---|---|
Processor | AMD Ryzen 7 5800X 4.7GHz-4.9GHz |
Motherboard | Gigabyte X570i Aorus Pro Wifi |
Cooling | Noctua NH-D15S+3x Noctua IPPC 3K |
Memory | Team Dark 3800MHz CL16 2x16GB 55ns |
Video Card(s) | Palit RTX 2060 Super JS Shunt Mod 2130MHz/1925MHz + 2x Noctua 120mm IPPC 3K |
Storage | Adata XPG Gammix S50 1TB |
Display(s) | LG 27UD68W |
Case | Lian-Li TU-150 |
Power Supply | Corsair SF750 Platinum |
Software | Windows 10 Pro |
Branch, there's a reason why Volta was pushed back (and Pascal didn't exist on engineering slides until a year and a bit before release). Vulkan/DX12 caught Nvidia with their pants down, so Pascal ended up being a Maxwell+ arch to use as a stepping stone whilst Volta is rearchitected.
So if we plot out all of these cards as Gflops vs Performance and fit a basic trend line we get that big Vega would be around 80% of Titan performance at 12 TFlops. (this puts it even with the 1080).
System Name | eazen corp | Xentronon 7.2 |
---|---|
Processor | AMD Ryzen 7 3700X // PBO max. |
Motherboard | Asus TUF Gaming X570-Plus |
Cooling | Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out) |
Memory | G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V |
Video Card(s) | Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6 |
Storage | Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM |
Display(s) | Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV |
Case | Corsair Carbide 600C |
Audio Device(s) | HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80 |
Power Supply | EVGA 650 GQ |
Mouse | Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller |
Keyboard | Corsair K70 LUX RGB /w Cherry MX Brown switches |
VR HMD | Still nope |
Software | Win 10 Pro |
Benchmark Scores | 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme |
System Name | All the cores |
---|---|
Processor | 2990WX |
Motherboard | Asrock X399M |
Cooling | CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL |
Memory | 4x16GB G.Skill 3600 |
Video Card(s) | (2) EVGA SC BLACK 1080Ti's |
Storage | 2x Samsung SM951 512GB, Samsung PM961 512GB |
Display(s) | Dell UP2414Q 3840X2160@60hz |
Case | Caselabs Mercury S5+pedestal |
Audio Device(s) | Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood |
Power Supply | Seasonic Prime 1200w |
Mouse | Thermaltake Theron, Steam controller |
Keyboard | Keychron K8 |
Software | W10P |
What has been released vs the 1080 in the last year? Would you call that average?
What are you defining "Performance" as? TFLOPS is itself a measure of performance, so these graphs make no sense to me.
The Vega cards replace the Furys, not the 290/390 SKUs, from what I have seen on the roadmap.
I honestly think AMD banked on Polaris clocking a good bit higher and being competitive with the 1070, but they were horribly let down by GloFo. AMD has a weird gap, yes, but I do not think it was Vega; I think it was Polaris failing them.
But considering they aren't doing anything to address the issue and are awfully quiet about it, it seems self-defeating to me. I'm not interested in paying $$$$ for a Fury replacement, nor in a 480, as it falls short of what I need from a card.
They've always had a gaming flagship & a separate (compute) flagship ever since the days of Fermi. That they've neutered DP on subsequent Titans is something entirely different; the original Titan had excellent DP capabilities, but every card that's followed has had DP cut down massively, & yet many call it a workstation card.

Vulkan/DX12 is not the reason why Pascal exists. Also, Pascal is a bit more complicated than a simple "Maxwell+". Nvidia did not want to repeat the problem they had with Kepler, so they ended up making two versions of Pascal: compute Pascal (GP100) and gaming Pascal (GP102 and the rest). Kepler excelled at GPGPU-related work, especially DP, but as a gaming architecture, not so much. Maxwell is probably the best design Nvidia could come up with for gaming purposes right now.
They've always had a gaming flagship & a separate (compute) flagship ever since the days of Fermi. That they've neutered DP on subsequent Titans is something entirely different; the original Titan had excellent DP capabilities, but every card that's followed has had DP cut down massively, & yet many call it a workstation card.
The GP102 & GP100 are different because of HBM2. Nvidia probably felt that they didn't need "next-gen" memory for their gaming flagship, or that saving a few more $ with GDDR5X was a better idea, & a single card cannot support these two competing memory technologies.
No one said it wasn't, but they could've gone the route of AMD & given 16/32 GB of HBM2 to the 1080 Ti/Titan, & yet they didn't, & IMO that's down a lot to costs.

The reason they crippled it was that the Titan was cutting into their own workstation graphics business. People aren't going to give up their hard-earned money when they don't have to, and the original Titan presented a very viable alternative to the Quadro line, so the Titan in that form had to go. I say it's self-preservation, not cost-cutting.
System Name | Raccoon City |
---|---|
Processor | i3 4160 |
Motherboard | MSI z97s Krait |
Cooling | Custom loop |
Memory | Avexir Core DDR3 1600mhz |
Video Card(s) | Sapphire R9 295x2 |
Storage | 2 x Adata 240gb SSD RAID 0 + 2 x WD Black 500gb RAID 0 |
Display(s) | Bravia 32" 120hz |
Case | Thermaltake Core P5 |
Audio Device(s) | Radeon audio |
Power Supply | Fractal 600w |
Mouse | CM storm |
Keyboard | IBM model M |
I honestly think AMD banked on Polaris clocking a good bit higher and being competitive with the 1070, but they were horribly let down by GloFo. AMD has a weird gap, yes, but I do not think it was Vega; I think it was Polaris failing them.
System Name | Games/internet/usage |
---|---|
Processor | I7 5820k 4.2 Ghz |
Motherboard | ASUS X99-A2 |
Cooling | custom water loop for cpu and gpu |
Memory | 16GiB Crucial Ballistix Sport 2666 MHz |
Video Card(s) | Radeon Rx 6800 XT |
Storage | Samsung XP941 500 GB + 1 TB SSD |
Display(s) | Dell 3008WFP |
Case | Caselabs Magnum M8 |
Audio Device(s) | Shiit Modi 2 Uber -> Matrix m-stage -> HD650 |
Power Supply | beQuiet dark power pro 1200W |
Mouse | Logitech MX518 |
Keyboard | Corsair K95 RGB |
Software | Win 10 Pro |
The only way I can think of for this HBC (High Bandwidth Cache) to work is that the driver would analyze per-game memory behavior and adapt operation accordingly, meaning game performance would improve as you play. Or via game profiles, much like the ones for CrossFireX. That way it would know what gets shuffled around regularly and what only needs rare access, so it could put the framebuffer and frequently accessed data in HBM2, less frequently used data in DDR4, and even less frequently used data on the SSD. The question is how efficiently the driver can manage this without all of it having to be predefined during game design...
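The profile-driven placement idea can be sketched roughly as "sort resources by observed access frequency, fill the fastest tier first." Everything here is hypothetical: the tier names, capacities, resource names, and sizes are invented for illustration, and this says nothing about how AMD's driver actually works.

```python
# Toy sketch of profile-driven tiered placement: count accesses per
# resource, then assign resources hottest-first to the fastest tier
# with room. All names/capacities are made-up illustrative values.
from collections import Counter

TIERS = [("HBM2", 8), ("DDR4", 16), ("SSD", 10**6)]  # (name, capacity units)

def place(access_log, sizes):
    """Assign each resource to a memory tier, hottest-first."""
    counts = Counter(access_log)
    placement = {}
    used = {name: 0 for name, _ in TIERS}
    for res, _ in counts.most_common():       # hottest resources first
        for name, cap in TIERS:               # fastest tier with room wins
            if used[name] + sizes[res] <= cap:
                placement[res] = name
                used[name] += sizes[res]
                break
    return placement

# Hypothetical per-game profile: the framebuffer is touched constantly,
# distant textures almost never.
log = ["framebuffer"] * 100 + ["near_textures"] * 40 + ["far_textures"] * 2
sizes = {"framebuffer": 4, "near_textures": 6, "far_textures": 12}
print(place(log, sizes))
```

The open question in the post stands: a static profile like this is easy, but reacting online without stutter when the access pattern shifts is the hard part.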
Processor | Ryzen 9 7950X3D |
---|---|
Motherboard | MSI X670E MPG Carbon Wifi |
Cooling | Custom loop, 2x360mm radiator,Lian Li UNI, EK XRes140,EK Velocity2 |
Memory | 2x16GB G.Skill DDR5-6400 @ 6400MHz C32 |
Video Card(s) | EVGA RTX 3080 Ti FTW3 Ultra OC Scanner core +750 mem |
Storage | MP600 Pro 2TB,960 EVO 1TB,XPG SX8200 Pro 1TB,Micron 1100 2TB,1.5TB Caviar Green |
Display(s) | Alienware AW3423DWF, Acer XB270HU |
Case | LianLi O11 Dynamic White |
Audio Device(s) | Logitech G-Pro X Wireless |
Power Supply | EVGA P3 1200W |
Mouse | Logitech G502X Lightspeed |
Keyboard | Logitech G512 Carbon w/ GX Brown |
VR HMD | HP Reverb G2 (V2) |
Software | Win 11 |
The slide below is from GTC 2014
The slide below is from GTC 2015
In 2013, Volta was present, but its HMC design stalled somewhat.
In 2014 (that would be almost three years ago), Volta had disappeared from the roadmap.
In 2015, Volta reappeared, and it looks very much like a late 2017 release, though it's far enough out that it could slip to 2018.
So really, Vega, DX12, etc. have nothing to do with Volta. The memory arrangement affected its position.
Please bear in mind that the only reason the Titan X is £1,100 is that AMD has NOTHING to touch it with. Nvidia couldn't care less about DX12 and Vulkan for its GFX cards - their own mid-range GP104 (not GP102 and not GP100) is still top dog.
By your own definition (a revamped Maxwell masquerading as Pascal), Nvidia doesn't even need to try to stay King of Cards.
A lot of "if"s and "could"s in there... I'm sure Vega will be great and a success. Why? Because AMD has changed a lot, and they are surrounded by success rather than failure like before. If Ryzen can be a success, Vega can be too.
And yes, those drivers aren't even remotely optimal. With 10% faster speed than the 1080 in Doom, further optimizations, and higher clock speeds (it was probably thermal throttling in that closed cage, with fans that don't help get the heat out of the case), it is indeed possible that it could rival the Titan XP / GP102.
Processor | AMD Ryzen 9 5900X ||| Intel Core i7-3930K |
---|---|
Motherboard | ASUS ProArt B550-CREATOR ||| Asus P9X79 WS |
Cooling | Noctua NH-U14S ||| Be Quiet Pure Rock |
Memory | Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz |
Video Card(s) | MSI GTX 1060 3GB ||| MSI GTX 680 4GB |
Storage | Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB |
Display(s) | Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24" |
Case | Fractal Design Define 7 XL x 2 |
Audio Device(s) | Cambridge Audio DacMagic Plus |
Power Supply | Seasonic Focus PX-850 x 2 |
Mouse | Razer Abyssus |
Keyboard | CM Storm QuickFire XT |
Software | Ubuntu |
The algorithm you describe is not possible. In games, the part of the allocated memory that's not used every frame would be the game world outside the bounds of the camera, but it's impossible for the GPU to know which parts will be used next.

The only way I can think of for this HBC (High Bandwidth Cache) to work is that the driver would analyze per-game memory behavior and adapt operation accordingly, meaning game performance would improve as you play. Or via game profiles, much like the ones for CrossFireX. That way it would know what gets shuffled around regularly and what only needs rare access, so it could put the framebuffer and frequently accessed data in HBM2, less frequently used data in DDR4, and even less frequently used data on the SSD. The question is how efficiently the driver can manage this without all of it having to be predefined during game design...
The purpose of caching is to hide the latency of a larger storage pool. The two basic principles are manual and automatic prefetching. Manual prefetching would require implementation in every game, but AMD has indicated they are talking about automatic prefetching. Automatic prefetching can only work if it's able to detect (linear) patterns in memory accesses. This can work well for certain compute workloads, where the data processed is just a long linear stream. It is, however, impossible to do with random accesses, like rendering. If they try, it will result in either unstable performance or popping resources, depending on how they handle missing data on the hardware side.

Not necessarily. I'm talking direct memory access, not prefetching. Prefetching still relies on storing data into main GPU memory (HBM2 in this case) based on some prefetching algorithm. HBC, from what I understand of AMD's slides, means direct, seamless access to these resources. Not sure how, but apparently it can be done. We'll know more when they actually release this thing.
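The "detect (linear) patterns" point can be illustrated with a toy stride detector: it only predicts the next address after the same stride has repeated, which works on a linear compute-style stream and yields nothing on render-style random accesses. This is purely a sketch of the principle, not how any real GPU prefetcher is implemented.

```python
# Toy stride prefetcher: after the same address stride repeats
# `confirm` times in a row, predict (prefetch) the next address.
# Linear streams trigger predictions; random accesses never do.

def prefetch_candidates(addresses, confirm=2):
    """Return predicted next addresses once a stride repeats `confirm` times."""
    predictions = []
    stride, seen = None, 0
    for prev, cur in zip(addresses, addresses[1:]):
        d = cur - prev
        if d == stride:
            seen += 1           # same stride again: confidence grows
        else:
            stride, seen = d, 1  # new stride: start over
        if seen >= confirm:
            predictions.append(cur + stride)  # confident: prefetch ahead
    return predictions

# Linear stream (compute-style): predictable, prefetches fire
print(prefetch_candidates([0, 64, 128, 192, 256]))
# Random accesses (render-style): no stable stride, nothing to prefetch
print(prefetch_candidates([512, 8, 4096, 160, 72]))
```

On the random sequence the detector stays silent, which is exactly the post's point: with rendering-style access patterns an automatic prefetcher has nothing to latch onto, so misses surface as stalls or popping.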