
Editorial On AMD's Raja Koduri RX Vega Tweetstorm

Joined
Sep 17, 2014
Messages
22,654 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I think I've said this before, but Poland seems to be the only place where Broadwell a) exists and b) wasn't/isn't €100+ more than an i7K. In the vast majority of the market they were non-existent, and getting an LGA1150 platform these days is just dumb, unless you get a good deal on an old system and somehow get hold of a 5775C for less than the €200-ish that i7Ks usually go for these days.

The i7 7700K in the Netherlands comes in at around 315-329 EUR; not sure where you're pulling the 200 EUR number from. They've never been that cheap - that's the i5 K.
The i7 5775C hovers around the 360 EUR price point.

Cheaper boards make up the difference, and cheaper DDR3 RAM makes the Broadwell build actually a bit cheaper still, or you can use that money to buy faster sticks. It's really 1:1 across the board, so that's why it's still on my mind too :) Another real consideration is that the i7 7700K needs to be pushed to 5 GHz, which increases the cost of proper cooling, offers little guarantee, and carries delid risk, while a typical 5775C OC @ 4.2 GHz is very easy to achieve and runs a LOT cooler by default, while being on par in terms of gaming. The total cost of the Broadwell rig will likely be lower, and the OC more feasible.

Really the only weak point is the platform - but then again I'm also fine with plain SATA SSDs; they're fast enough, tbh, and offer good price/GB.

Sorry, this is really off-topic. :eek:

I'll give them till Navi to see, as it seems Navi is RTG's Ryzen. Vega is an awesome card. I was able to get 2, both at the $500 price tag. It is smooth on my non-FreeSync panel. I'm currently undergoing a system overhaul on both my PCs and can't bench or test anything, which sucks. But in due time I think Vega will be at least 10% faster, give or take 3 to 6 months. Sadly, AMD keeps repeating this cycle; I thought they had learnt from the Polaris release and even Ryzen. We all knew something was up when Vega was supposed to be released right after Threadripper, but reviewers got the cards literally 3 days before the release date and were then told to focus on the 56 rather than the 64, even though the 64 released first. Shameful on AMD's part. I just hope they get their act together sooner rather than later. I (we all) need more/better competition. I'll rock out with Vega until Navi shows itself though. Another sad part is that because of AMD's marketing, or lack thereof, even when they have the better option we as consumers don't support them. To some degree I feel we're partly to blame for this.

The trend with Ryzen is fundamentally breaking that mold, but even then I think people are more tired of Intel's games than genuinely drawn to Ryzen's actual appeal. Odd way to look at things and maybe even small-minded of me, but I can't help thinking about the Athlon era of CPUs. AMD clearly had the better product, yet people willingly bought Intel. Same goes for the 5870/5970 era of GPUs. They were the best at just about every level, yet consumers still bought NVIDIA.

We are to blame for making AMD suck? Naahh, it's the other way around. AMD sells us 'tomorrow'. NVIDIA and Intel sell us 'today' - and by the time AMD's 'tomorrow' proposition has landed, NVIDIA and Intel have moved on to a new 'today'. Of course we have seen launch problems from every company, but with AMD it's a guarantee, and you get the fine wine argument along with that. This combination of drawbacks just doesn't instill confidence in the brand.

Ryzen had a similar issue, and most people called that 'a good launch for AMD'. Go figure! Gaming performance was all over the place, boards weren't ready, RAM was and still is a special breed. Yes, yes, new platform and all. But again, these things don't help the brand at all. If it's an exception (like Intel's X299 kneejerk response with a completely screwed-up product stack), you'll just be surprised by it. If it becomes the rule, many quickly avoid it.

Ryzen could also have been sold so much better - imagine the difference if AMD had been up front about Ryzen's pretty hard 4 GHz wall and limited OC potential, but then went on to sell Ryzen on its extreme efficiency and core count advantage below or AT that 4 GHz wall. They didn't do this explicitly at all; we had to analyze reviews to get there ourselves. I mean, IT'S ON THE BOX. The TDPs are very good, especially for a competitor reintroducing itself to high-end CPUs after a long time; that is and was a massive achievement. Instead they just drop five slides with pricing, architecture and product stacks and some one-liners, and then drop the ball. Every. Single. Fucking. Time. Releasing one solid product after half a decade does not rebuild confidence in a brand once it's been lost. They will have to remain consistent with solid releases and SELL those releases to the public, and Vega's release doesn't help this at all.

Really the longer I watch them, the more I am convinced AMD is its own greatest competitor.
 
Joined
Sep 11, 2015
Messages
624 (0.18/day)
Really the longer I watch them, the more I am convinced AMD is its own greatest competitor.
Just shut up, ok? AMD is our only chance for 4K gaming sooner rather than later. At least they are trying to provide competition against the two top companies in the PC arena. If all we had were Intel and NVIDIA, they'd be milking 1440p for the next 10 years. All these two companies care about is money and the status quo you meekly call "today". At least AMD tries something new. Trying new things will always lead to some failure, but don't worry, sooner or later it will pay off.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
The i7 7700K in the Netherlands comes in at around 315-329 EUR; not sure where you're pulling the 200 EUR number from. They've never been that cheap - that's the i5 K.
The i7 5775C hovers around the 360 EUR price point.

Cheaper boards make up the difference, and cheaper DDR3 RAM makes the Broadwell build actually a bit cheaper still, or you can use that money to buy faster sticks. It's really 1:1 across the board, so that's why it's still on my mind too :) Another real consideration is that the i7 7700K needs to be pushed to 5 GHz, which increases the cost of proper cooling, offers little guarantee, and carries delid risk, while a typical 5775C OC @ 4.2 GHz is very easy to achieve and runs a LOT cooler by default, while being on par in terms of gaming. The total cost of the Broadwell rig will likely be lower, and the OC more feasible.

Really the only weak point is the platform - but then again I'm also fine with plain SATA SSDs; they're fast enough, tbh, and offer good price/GB.

Sorry, this is really off-topic. :eek:



We are to blame for making AMD suck? Naahh, it's the other way around. AMD sells us 'tomorrow'. NVIDIA and Intel sell us 'today' - and by the time AMD's 'tomorrow' proposition has landed, NVIDIA and Intel have moved on to a new 'today'. Of course we have seen launch problems from every company, but with AMD it's a guarantee, and you get the fine wine argument along with that. This combination of drawbacks just doesn't instill confidence in the brand.

Ryzen had a similar issue, and most people called that 'a good launch for AMD'. Go figure! Gaming performance was all over the place, boards weren't ready, RAM was and still is a special breed. Yes, yes, new platform and all. But again, these things don't help the brand at all. If it's an exception (like Intel's X299 kneejerk response with a completely screwed-up product stack), you'll just be surprised by it. If it becomes the rule, many quickly avoid it.

Ryzen could also have been sold so much better - imagine the difference if AMD had been up front about Ryzen's pretty hard 4 GHz wall and limited OC potential, but then went on to sell Ryzen on its extreme efficiency and core count advantage below or AT that 4 GHz wall. They didn't do this explicitly at all; we had to analyze reviews to get there ourselves. I mean, IT'S ON THE BOX. The TDPs are very good, especially for a competitor reintroducing itself to high-end CPUs after a long time; that is and was a massive achievement. Instead they just drop five slides with pricing, architecture and product stacks and some one-liners, and then drop the ball. Every. Single. Fucking. Time. Releasing one solid product after half a decade does not rebuild confidence in a brand once it's been lost. They will have to remain consistent with solid releases and SELL those releases to the public, and Vega's release doesn't help this at all.

Really the longer I watch them, the more I am convinced AMD is its own greatest competitor.
While I agree with some of your points, we still share part of the blame: the 4800/5800 cards were way better than their NVIDIA counterparts and cheaper. What did we consumers do? Still bought NVIDIA. AMD never sold us an overclocker's dream; people with high hopes caused their own letdown. We see it here in our own forum: Ryzen decimates just about the whole i7 lineup ranging from $300-700, yet people in these same forums still wouldn't support AMD to save their own lives. If consumers want to pay $1,700 for a 10-core processor then so be it... I'm out. I will not support that one bit. If AMD had the same budget as NVIDIA/Intel, I'm quite sure they'd pull off similar feats. You guys are failing to realize that this "lil" company is taking on two giants at the same time with nowhere near those companies' money at hand.

At the end of the day, is Vega hot? Yes. Is it competing with its targets (1070/1080)? Yes. Despite all the issues/controversy surrounding Vega, is it selling? Yes.
Is Ryzen selling? Yes.

It's a consumer's market; we determine a product's outcome. People flocked to the original Titan at $1,000, and guess what NVIDIA did? Raised the new one to $1,200. Did people still buy it? Yup.
 
Joined
Dec 17, 2010
Messages
22 (0.00/day)
Location
Paris, France (till 2019).
System Name Utopia Planitia
Processor AMD FX-8350 (Vishera) (Testing my new build AMD Ryzen 1800X) Will change this after :P
Motherboard MSI 970A-G46
Memory 4x Kingmax 8GB DDR3 1600MHz CL9
Video Card(s) 1x MSI Radeon RX 480, 1x XFX Radeon RX 480.
Storage SSD + 2x 2TB Seagate HDDs
Display(s) 3x BenQ GL2460
Case Irrelevant
Audio Device(s) Integrated :P
Power Supply Irrelevant since it works flawlessly
Mouse Irrelevant
Keyboard Irrelevant
Software Windows 10 x64
Seems overpriced to me... /pass... will be getting a 1080 Ti in a few months.

Better get it right now, since the price will increase by at least 30%.
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Decimates Intel with more cores for your dollar... 3 years from now, this situation would be awesome. Too bad 4c/8t is perfectly fine for those next 3 years. Some prefer paying a premium for the faster-IPC, better-overclocking chip.

Not that it makes it much better, but the 7900X is $999, not $1,700 ;)
 
Joined
Dec 17, 2010
Messages
22 (0.00/day)
Location
Paris, France (till 2019).
System Name Utopia Planitia
Processor AMD FX-8350 (Vishera) (Testing my new build AMD Ryzen 1800X) Will change this after :P
Motherboard MSI 970A-G46
Memory 4x Kingmax 8GB DDR3 1600MHz CL9
Video Card(s) 1x MSI Radeon RX 480, 1x XFX Radeon RX 480.
Storage SSD + 2x 2TB Seagate HDDs
Display(s) 3x BenQ GL2460
Case Irrelevant
Audio Device(s) Integrated :P
Power Supply Irrelevant since it works flawlessly
Mouse Irrelevant
Keyboard Irrelevant
Software Windows 10 x64
If I were Raja Koduri, all I would have to say would be: "I am sorry. I've written my resignation. Have a nice day."
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
You must be hallucinating. Vega is a compute monster. It's just not as good as GeForce for gaming enthusiasts.
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
Decimates Intel with more cores for your dollar... 3 years from now, this situation would be awesome. Too bad 4c/8t is perfectly fine for those next 3 years. Some prefer paying a premium for the faster-IPC, better-overclocking chip.

Not that it makes it much better, but the 7900X is $999, not $1,700 ;)

So is a Threadripper 1950X with 16 cores and 32 threads...
 
Joined
Jun 10, 2014
Messages
2,988 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Vega has about 10% higher IPC than Fiji with some 50% higher clockspeed to boot.
I guess you mean theoretically. In gaming, Vega performs worse per clock than Fiji.
(Analogy: theoretically Ryzen has ~50% higher "IPC" than Skylake, but in real life Skylake's IPC is much higher.)
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,661 (2.86/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 11 Pro
Benchmark Scores Rimworld 4K ready!
The i7 7700K in the Netherlands comes in at around 315-329 EUR; not sure where you're pulling the 200 EUR number from. They've never been that cheap - that's the i5 K.
The i7 5775C hovers around the 360 EUR price point.

I meant used Haswell i7Ks.
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
Not many people sell their 5775C in the first place. I know I won't sell mine in 2-3 years when I finally upgrade to moar cores. I'll use it for a console-killer HTPC hooked up to my living room TV, along with an undervolted GTX 1080.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
evernessince said:
"On top of all that, RTG locked Vega's BIOS. Meaning there will be no way to implement any actual BIOS based modifications. RX Vega is a failure no matter what way you spin the story."

Lol, no

https://www.techpowerup.com/236632/...lash-no-unlocked-shaders-improved-performance
https://forum.ethereum.org/discussion/15024/hows-it-hashin-vega-people

It hasn't even been very long and people can easily flash their Vega BIOS.

There were plenty of other things you could have shit on Vega for, but you literally chose non-issues.


Flashing an unmodded BIOS is not modding the BIOS. :p The driver verifies that the BIOS has a valid checksum at boot. Modding Linux to make it work doesn't actually make it work...
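
For illustration, here's a toy C++ sketch of the kind of integrity gate being described. It follows the classic PCI option-ROM rule (all bytes of the image summing to 0 mod 256) and is only an analogy; the validation in AMD's actual driver may well be stricter than a plain checksum:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Classic PCI option-ROM rule: all bytes of the image must sum to 0 mod 256.
// An image edited after flashing, without re-balancing, fails this immediately.
static bool checksum_ok(const std::vector<uint8_t> &rom)
{
    uint8_t sum = 0;
    for (uint8_t b : rom) sum += b;   // wraps mod 256 by definition
    return sum == 0;
}

int main()
{
    // Minimal fake ROM: 0x55AA signature, some padding, then a balancing
    // byte chosen so the total wraps to zero.
    std::vector<uint8_t> rom = {0x55, 0xAA, 0x01, 0x02};
    uint8_t sum = 0;
    for (uint8_t b : rom) sum += b;
    rom.push_back(uint8_t(0x100 - sum));

    printf("pristine image: %s\n", checksum_ok(rom) ? "accepted" : "rejected");
    rom[3] ^= 0xFF;                   // "mod" one byte of the image
    printf("edited image:   %s\n", checksum_ok(rom) ? "accepted" : "rejected");
    return 0;
}
```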
 
Joined
Mar 18, 2008
Messages
5,717 (0.93/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
cadaveca said:
evernessince said:
"On top of all that, RTG locked Vega's BIOS. Meaning there will be no way to implement any actual BIOS based modifications. RX Vega is a failure no matter what way you spin the story."

Lol, no

https://www.techpowerup.com/236632/...lash-no-unlocked-shaders-improved-performance
https://forum.ethereum.org/discussion/15024/hows-it-hashin-vega-people

It hasn't even been very long and people can easily flash their Vega BIOS.

There were plenty of other things you could have shit on Vega for, but you literally chose non-issues.

Flashing an unmodded BIOS is not modding the BIOS. :p The driver verifies that the BIOS has a valid checksum at boot. Modding Linux to make it work doesn't actually make it work...


Don't burst his imagination bubble, goat!
 
Joined
Aug 6, 2009
Messages
1,162 (0.21/day)
Location
Chicago, Illinois
Another real consideration is that the i7 7700K needs to be pushed to 5 GHz, which increases the cost of proper cooling, offers little guarantee, and carries delid risk, while a typical 5775C OC @ 4.2 GHz is very easy to achieve and runs a LOT cooler by default, while being on par in terms of gaming. The total cost of the Broadwell rig will likely be lower, and the OC more feasible.

Calling the i7 7700K the Vega of Intel is really way out of left field and makes no sense. The 7700K is actually huge bang for the buck; had Intel not been so boneheaded with the TIM they used under the lid, it would easily have been the best processor for the money in a long time, in my opinion. Also, I'm curious why you have it in your head that the 7700K "needs to be pushed to 5 GHz"? Plenty of people run them at stock clocks and they work just fine. Anyhow, like you said, this is off-topic, and I have no idea why you chose to single out the 7700K and try to smear it. Right now they're selling for $299.99 at Fry's Electronics. I bought mine for a bit over $300 almost a year ago.
 
Joined
Oct 28, 2012
Messages
1,194 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
People keep buying NVIDIA/Intel because AMD is still seen as the budget solution. It didn't matter that the Pentium 4 was hot and slow for such a high GHz, or that Fermi was hot and power hungry; those still sold, because their makers were seen as the market leaders. This effect is even worse when the buyer isn't a tech nerd.

For the average Joe, the P4 with its higher clock had to be better; architecture optimization isn't something they are aware of. They also don't care/know about the fact that AMD was first to 64-bit.
Intel also has a nice little ecosystem going on:

Thunderbolt being Intel-locked means that Apple won't ever go full AMD, and it also means giving up on that in any Ryzen-based laptop, so no eGPU with a Ryzen system.



As for NVIDIA, they have a habit of selling the fastest thing on earth, which does a lot for brand loyalty and brand equity (GeForce 256, 8800 GTX, Titan), and when they launch new "toys" they do it right:
  • 3D Vision was better and easier to set up; 3D with AMD was a mess.
  • PhysX may be a gimmick, but it's still a nice little extra to have, and with all the successful Batman games featuring it, it was another reason to get an NVIDIA card.
  • G-Sync was first and heavily marketed; FreeSync came later and had poor marketing.
  • CUDA is easier to work with and developers get better support on every platform, while OpenCL can be really troublesome: some of the developers working on the OpenCL Cycles renderer for Mac OS X gave up out of disgust because they couldn't get any support to fix an issue. OpenCL on Mac OS X is broken, and even though Apple created OpenCL, they didn't care.
The fine wine argument isn't something marketable, and if they ditch GCN it may not hold true anymore. When the GTX 980/Ti launched, it was a fact that they were the fastest GPUs; that's how you sell products to people. You don't say: in 2 years our product will go toe to toe with the old fastest GPU on the market.

AMD may have been the first to bring HBM to consumers, but the fact that it wasn't the fastest thing on earth failed to impress part of the market. People in general are not reasonable, fully rational buyers; they like dreams, and buying something from the company selling the fastest GPU, or from the makers of the "1st" CPU, sounds better than buying from the budget company. Being seen as a premium brand does a lot: even if a client doesn't buy the high-end product, they still feel like they are part of it.



It wasn't long ago that I had to show some benchmarks to a guy who was making a "meh" face when I mentioned that the new AMD CPUs were good products. People don't know that AMD is back in the game, and all the high-end laptops still rocking an Intel CPU / NVIDIA GPU won't help.
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
That's not a weakness.
Huh, where is this notion of weakness coming from? ... it's far from weak, just not as efficient ... and if you mean weakness as in a shortcoming, it sounds better to call it a trade-off.
But NVIDIA doesn't really give us any details on what's inside the cores except the obvious.
I don't think anything is missing; all the parts are there, although the naming convention differs.
I guess you are wondering, for example, where the branch predictor is in the NVIDIA arch ... there isn't one. A warp executes both parts of a conditional statement (first the "if" block, then the "else" block), with a portion of the CUDA cores sleeping through the "if" part and the other portion sleeping through the "else" part. Brute-forcing branching in shader code like this seems inefficient and a weakness, but it's a trade-off for a simple and robust parallel design that saves die space and wins efficiency in total.
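
To picture it, here's a minimal CUDA sketch (hypothetical kernel, purely illustrative) of one warp hitting a divergent branch:

```cuda
#include <cstdio>

// Both halves of the warp hit this branch. The hardware issues the "if"
// side with lanes 16-31 masked off, then the "else" side with lanes 0-15
// masked off - the warp serializes over both paths.
__global__ void divergent(float *out)
{
    int lane = threadIdx.x;          // 0..31, a single warp
    if (lane < 16)
        out[lane] = lane * 2.0f;     // lanes 16-31 idle here
    else
        out[lane] = lane * 0.5f;     // lanes 0-15 idle here
}

int main()
{
    float *d_out, h_out[32];
    cudaMalloc(&d_out, 32 * sizeof(float));
    divergent<<<1, 32>>>(d_out);     // one block, one warp
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("lane 0 -> %.1f, lane 31 -> %.1f\n", h_out[0], h_out[31]);
    cudaFree(d_out);
    return 0;
}
```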
 
Joined
Jan 8, 2017
Messages
9,500 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Basically it's a low-latency throughput favoring design that is wasteful and inflexible.

I don't know if that's true; higher instruction throughput is exactly what you want if you desire efficiency.

Actually, that's exactly what you want in general with a GPU, where you need to keep the core fed. The fact that Vega gains a lot more from extra memory bandwidth suggests there is nothing wrong with the way instructions are handled. Which isn't surprising at all; one of the biggest bottlenecks inside any of these GPUs was and still is memory bandwidth/latency. AMD should have kept the 4096-bit memory bus.

I don't think anything is missing; all the parts are there, although the naming convention differs.
I guess you are wondering, for example, where the branch predictor is in the NVIDIA arch ... there isn't one. A warp executes both parts of a conditional statement (first the "if" block, then the "else" block), with a portion of the CUDA cores sleeping through the "if" part and the other portion sleeping through the "else" part. Brute-forcing branching in shader code like this seems inefficient and a weakness, but it's a trade-off for a simple and robust parallel design that saves die space and wins efficiency in total.


A lack of branch prediction means more burden on compilers/drivers and more CPU overhead in general at run-time. And I would say that's not robust at all; in fact, it's the reason why until Pascal they struggled with async. When they said Maxwell could do async and it just needed enabling, they were BSing. What they meant is that they could only do it through the help of the driver, but that would have ended up being ridiculously inefficient and would most likely have hurt performance, which is why we never saw "a magic async-enabling driver".

NVIDIA offloaded a lot of the control logic onto the driver side of things, while AMD relies almost entirely on the silicon itself to properly dispatch work. That should have been an advantage for AMD if anything; however, NVIDIA struck gold when they managed to support multi-threaded draw calls automatically through their drivers in DX11, and AMD for some reason couldn't.

And this is why DX12/Vulkan was supposed to help AMD: draw call submission is now handled exclusively by the programmer, and NVIDIA's drivers can't do jack anymore. But all we got were lousy implementations, since doing it properly requires a lot of work. In fact, it can hurt performance, because even the few DX11 optimizations AMD had going get nullified; see Rise of the Tomb Raider.

In the end, my point is that the design language AMD uses for its architecture is perfectly fine; it's just that they have to deal with an antiquated software landscape that isn't on the same page with them.
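
To make the draw-call point concrete, here's a toy C++ sketch (not real D3D12/Vulkan code; DrawCmd, CommandList and record are made-up stand-ins) of an engine recording command lists on several threads and handing them to the queue in one go - the work a DX11 driver used to attempt behind the app's back:

```cpp
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Toy model only. Each worker thread records its own command list; nothing
// is shared, so no driver-side lock is needed. This is the burden that
// DX12/Vulkan shifts from the driver onto the engine programmer.
struct DrawCmd { uint32_t mesh; uint32_t material; };
using CommandList = std::vector<DrawCmd>;

static void record(CommandList &cl, uint32_t first, uint32_t count)
{
    for (uint32_t i = 0; i < count; ++i)
        cl.push_back({first + i, (first + i) % 4});   // pretend draw calls
}

int main()
{
    const unsigned workers = 4;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> pool;

    // Record 1000 draws per thread in parallel.
    for (unsigned t = 0; t < workers; ++t)
        pool.emplace_back(record, std::ref(lists[t]), t * 1000u, 1000u);
    for (auto &th : pool) th.join();

    // "Submit": one ordered hand-off to the queue, analogous in spirit to
    // ExecuteCommandLists / vkQueueSubmit.
    size_t total = 0;
    for (auto &cl : lists) total += cl.size();
    printf("submitted %zu draw calls from %u threads\n", total, workers);
    return 0;
}
```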
 
Joined
Jul 10, 2015
Messages
754 (0.22/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Got my Vega 64 in the first week they were sold. Waited 2 more weeks for the pre-ordered EK watercooler. Now it's at >1600 MHz all the time, >50% faster than my Fury X before it, and hardly reaches 50°C. I'm satisfied with the increase in gaming performance at 1440p so far. So, tnx AMD, you delivered.
A 1080 Ti almost doubles it, so 100% if you compare a custom 1080 Ti vs a Fury X. Satisfied? Maybe with the temperature.
 
Joined
Jun 10, 2014
Messages
2,988 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I don't know if that's true; higher instruction throughput is exactly what you want if you desire efficiency.

Actually, that's exactly what you want in general with a GPU, where you need to keep the core fed. The fact that Vega gains a lot more from extra memory bandwidth suggests there is nothing wrong with the way instructions are handled. Which isn't surprising at all; one of the biggest bottlenecks inside any of these GPUs was and still is memory bandwidth/latency. AMD should have kept the 4096-bit memory bus.
Nonsense.
Memory bandwidth is not holding Vega back; it has about the same bandwidth as a GTX 1080 Ti (2048-bit HBM2 at ~1.9 Gbps ≈ 484 GB/s on Vega 64, 352-bit GDDR5X at 11 Gbps ≈ 484 GB/s on the 1080 Ti). The problem is the GPU's scheduler and a longer pipeline.

Lack of branch prediction means more burden on compilers/drivers and more CPU overhead in general at run-time…
Nonsense.
Execution of shader code and low-level scheduling is done on the GPU, not the CPU. Conditionals in shader code don't stress the compiler or the driver. Start by learning the basics first.

Nvidia offloaded a lot of the control logic on the driver side of things , meanwhile AMD relies almost entirely on the silicon itself to properly dispatch work…
Nonsense.
The drivers translate a queue of API calls into native operations; the GPU schedules clusters, analyzes dependencies, etc. in real time.

And this is why DX12/Vulkan was supposed to help AMD since now draw calls are handled exclusively by the programmer and Nvidia's drivers can't do jack now.
Nonsense.
If you knew anything about graphics programming, you'd know the programmer has always controlled the draw calls.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
I don't think anything is missing; all the parts are there, although the naming convention differs.
Case in point: how many SIMDs does each CUDA core have? How many scalar operations can each core perform per clock?

Hmm... http://www.tomshardware.com/reviews/nvidia-cuda-gpu,1954-7.html

If I'm following this correctly, AMD has flexibility at the compute unit level, where CUDA only has flexibility at the warp level (64-512 threads). If a block is told to execute just a single SIMD operation, 98.4-99.8% of the block will be idle hardware. I suspect GCN is the opposite: the ACE will reserve a single compute unit to deal with that task while the rest of the GPU continues with other work.

The design philosophies are fundamentally opposite: GCN is about GPGPU, where CUDA is about GPU with the ability to switch context to GPGPU. GCN demands saturation (like a CPU) where CUDA demands strict control (like a GPU).

Which is better going forward? Which should computers 10 years from now have? I think GCN, because DirectPhysics is coming. We're finally going to have a reason to get serious about physics as a gameplay mechanic, and NVIDIA can't get serious because of the massive frame rate penalty CUDA incurs when context switching. It makes far more sense for physics to occupy idle compute resources than to disrupt the render wavefront. CUDA, as it is designed now, will hold engines back from using physics as they should.
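
To make the idle-hardware arithmetic tangible, here's a minimal CUDA sketch (hypothetical kernel): a <<<1, 1>>> launch still issues a full 32-lane warp, and with only one warp resident the SM has almost nothing else to schedule:

```cuda
#include <cstdio>

__global__ void one_op(float *out)
{
    // A single "SIMD operation" worth of work.
    *out = 2.0f * 3.0f;
}

int main()
{
    float *d, h;
    cudaMalloc(&d, sizeof(float));

    // One block, one thread. The warp still issues 32 lanes wide, so
    // 31/32 of the lanes do nothing, and with a single warp in flight
    // there is nothing for the SM to hide latency with.
    one_op<<<1, 1>>>(d);

    cudaMemcpy(&h, d, sizeof(float), cudaMemcpyDeviceToHost);
    printf("%.1f\n", h);  // 6.0
    cudaFree(d);
    return 0;
}
```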
 
Joined
Sep 17, 2014
Messages
22,654 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Calling the i7 7700K the Vega of Intel is really way out of left field and makes no sense. The 7700K is actually huge bang for the buck; had Intel not been so boneheaded with the TIM they used under the lid, it would easily have been the best processor for the money in a long time, in my opinion. Also, I'm curious why you have it in your head that the 7700K "needs to be pushed to 5 GHz"? Plenty of people run them at stock clocks and they work just fine. Anyhow, like you said, this is off-topic, and I have no idea why you chose to single out the 7700K and try to smear it. Right now they're selling for $299.99 at Fry's Electronics. I bought mine for a bit over $300 almost a year ago.

There is nothing wrong with the CPU, but what I am looking for in my next upgrade is the most solid CPU for gaming - and only that. Have you seen comparisons of an OC'd 7700K and a 5775C next to each other, their temps and vCores? It's quite a gap, and it shows that Intel has pushed the core to/past its limit. The fact is, the 7700K needs some serious work for a similar-performing OC too.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
@efikkan Vega is definitely bandwidth starved. Increasing the memory clock gives you a better boost at lower power draw than doing the same with the core.
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
Case in point: how many SIMDs does each CUDA core have? How many scalar operations can each core perform per clock?
Dude, a CUDA core is much more lightweight than that, and it's completely scalar ... it's an execution unit that can retire 1 single-precision floating-point operation per clock (2 for a fused multiply-add).
So it basically takes several of them to do the job of a single SIMD unit in GCN ... this is what I meant by finer granularity.
If a block is told to execute just a single SIMD operation, 98.4-99.8% of the block will be idle hardware.
A single SIMD operation still has multiple data, so it would saturate much more than that ... what you are describing is using SIMD as SISD (a data array with only 1 element), and in that context, yes, GCN would handle it more gracefully.
NVIDIA's model is actually SIMT, and its efficiency depends on having enough concurrent warps: even when those warps don't saturate the SM by themselves, if there are enough of them for the scheduler to choose from, all is well (good read: http://yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html)
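
A small CUDA sketch of both points (hypothetical SAXPY kernel): each lane retires one FMA (counted as 2 FLOPs) per step, and a wide launch gives the schedulers thousands of warps to rotate through, which is the saturation condition described above:

```cuda
#include <cstdio>

// Each thread is "one CUDA core's worth" of scalar work per step:
// fmaf(a, b, c) = a*b + c retires 2 FLOPs in one instruction.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = fmaf(a, x[i], y[i]);   // one FMA per lane
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 4096 blocks x 256 threads: thousands of warps in flight, so the
    // SM schedulers always have runnable warps to pick from.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // 3*1 + 2 = 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}
```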
The design philosophies are fundamentally opposite: GCN is about GPGPU, where CUDA is about GPU with the ability to switch context to GPGPU. GCN demands saturation (like a CPU) where CUDA demands strict control (like a GPU).
Ditto. NVIDIA kind of expects copious amounts of SIMD/SIMT parallelism in their workloads, because they are making GPUs.
With GPGPU it's always the question: "How general-purpose do you want it to be?"
Which is better going forward? Which should computers 10 years from now have? I think GCN, because DirectPhysics is coming. We're finally going to have a reason to get serious about physics as a gameplay mechanic, and NVIDIA can't get serious because of the massive frame rate penalty CUDA incurs when context switching. It makes far more sense for physics to occupy idle compute resources than to disrupt the render wavefront. CUDA, as it is designed now, will hold engines back from using physics as they should.
Yeah well, physics solvers are extremely dynamic in their load ... one frame nothing is happening, the next frame one collision sets hundreds of objects in motion. If you simply must update all positions by the end of the frame, and you do, async is the way to go. I guess that's why CUDA PhysX is exclusively of the SIMD/SIMT kind (huge numbers of elements following the same algorithm: fluid, hair, grass). Too bad we don't have some alpha version previewed somewhere by now, since async compute is a thing.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Intel abandoned Havok after they bought it, so Microsoft has a lot of work to do before they debut it: not just integrating it into DirectX 12, but also making it competitive with PhysX, which has been getting constant updates for the last... almost decade. Quite depressing, but it's something Microsoft can't afford to pass up.

It's going to take GPGPU to make physics mean something in games. Specifically, they need to prioritize the compute tasks and balance the GPU rendering work against them. PhysX was always built on the foundation of having a separate card handling it. DirectPhysics can't do that.


Edit: https://trademarks.justia.com/871/86/directphysics-87186880.html
DIRECTPHYSICS/DIRECT PHYSICS said:
Computer software development tools; Computer software development tools for game programs, three-dimensional application programs and other application programs; Computer software, namely, software development kits (SDK); Computer software for use in the design, development and execution of game programs, three-dimensional interactive application programs, and other application programs; Computer software, namely, computer operating programs for games, three-dimensional interactive application programs, and other application programs; Computer software for developing and interfacing with virtual reality software, hardware and peripherals; Computer software, namely, 3D content and motion content rendering software for use in producing videos and animation
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
But I see a bright future if Havok soon becomes DirectPhysics as part of DirectX; that's the moment we'll see a huge leap in game physics. It won't be just ragdolls anymore - it'll actually be able to affect gameplay.
 