No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs. Though finding a way to USE all 256 will require lots of hardware (but I'm sure some server users will find a way to use that many). Plus, for second gen it's 128 PCIe 4.0 lanes per CPU. That's yummy. Also, Intel's 56-core CPU is soldered, meaning you have to buy the motherboard; you can't swap the CPU in case something happens. Even Intel is making custom cooling solutions for it, depending on the U size of the server chassis, whereas EPYC can be used in many more places.
No just no.... read before writing and learn.
The amendments on power are coming, along with more detailed reviews of power usage. A couple of notes:
Rome boards are designed with the expectation of 250 W per socket, either for Milan or for turbo headroom; reviews will tell.
128 lanes of PCIe 4.0 per CPU; when configured in dual-CPU mode, half the lanes on each are repurposed as XGMI links, which are x16 links running a more efficient protocol for lower latency and higher bandwidth.
Server makers can opt to use 3 XGMI links instead of 4, freeing up an extra 32 lanes, but that sacrifices inter-socket bandwidth while increasing the need for it. I think it's a bad play, as 128 PCIe 4.0 lanes is already a shitton of bandwidth... (lane math sketched below).
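To make that trade-off concrete, here's a minimal sketch of the lane arithmetic, assuming 128 lanes per socket and one x16 group per XGMI link per socket (my rough figures for illustration, not AMD's spec sheet):

```python
# Rough sketch of the Rome lane math described above.
# Assumption: each socket exposes 128 PCIe 4.0 lanes, and each XGMI
# inter-socket link consumes one x16 group on each socket.

LANES_PER_SOCKET = 128
LANES_PER_XGMI_LINK = 16

def usable_pcie_lanes(sockets: int, xgmi_links: int) -> int:
    """PCIe lanes left for devices after reserving lanes for XGMI."""
    if sockets == 1:
        return LANES_PER_SOCKET
    # In a 2P system, each socket gives up 16 lanes per XGMI link.
    reserved = sockets * xgmi_links * LANES_PER_XGMI_LINK
    return sockets * LANES_PER_SOCKET - reserved

print(usable_pcie_lanes(1, 0))  # 128 lanes, single socket
print(usable_pcie_lanes(2, 4))  # 128 lanes, dual socket, 4 XGMI links
print(usable_pcie_lanes(2, 3))  # 160 lanes, dual socket, 3 XGMI links
```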
The Intel 9200 is BGA, and boards and chips have to be bought from Intel; it's a 200k sort of play without RAM... and almost no one is buying first gen. It draws too much power, there's no differentiation to be had between vendors... it's just not a good thing. Intel has sort of listened and made a gen-2 part, with Cooper Lake being socketed and upgradable to Ice Lake.
Comparing the 9200 and Rome is not useful, as the 9200 isn't really in the market. Intel having 96 PCIe 3.0 lanes vs. 128-160 PCIe 4.0 lanes is just an insane bandwidth difference (rough numbers below). As far as server config is concerned, I expect many single-proc Rome servers, and most dual-proc setups to be configured with 3 XGMI links.
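Back-of-the-envelope numbers for that bandwidth gap, assuming roughly 1 GB/s per PCIe 3.0 lane and 2 GB/s per PCIe 4.0 lane per direction (approximate figures from the 8 GT/s and 16 GT/s line rates, not vendor specs):

```python
# Approximate aggregate PCIe bandwidth, per direction.
GBPS_PER_LANE = {"pcie3": 1.0, "pcie4": 2.0}

def aggregate_gbps(lanes: int, gen: str) -> float:
    return lanes * GBPS_PER_LANE[gen]

print(aggregate_gbps(96, "pcie3"))   # ~96 GB/s  (dual-socket Xeon, 96 lanes)
print(aggregate_gbps(128, "pcie4"))  # ~256 GB/s (2P Rome, 4 XGMI links)
print(aggregate_gbps(160, "pcie4"))  # ~320 GB/s (2P Rome, 3 XGMI links)
```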
Intel will most likely retain a single-threaded performance advantage in the server realm, but will be dominated in anything that can use the insane number of threads AMD is offering.
As far as what Keller is working on... he is VP of SoC and is working on die stacking and other vertical, highly integrated density gains...
He's claiming 50x density improvements over 10 nm, and says it is "virtually working already."
225 W is the official top SKU, though I see Gigabyte allowing cTDP up to 240 W.
What we do know is that dual 64-core uses less power than dual 28-core by a healthy margin, and one 64-core is about all it takes to match or better dual 28-core.
The 2020 "competition" is a socketed version of the 9200, so the bga will no longer be an issue, power probably still will be, or it won't be very competitive.
Currently, on an unoptimized AMD path (not even using AVX2, which Rome supports) versus AVX-512 on Intel, a dual 8280 setup (2x $10k chips) will match a 2x $7k Rome setup; give Rome AVX2 and that will never happen.
56-core $10,000... 64-core $7,000, yeah, no brainer.
No no no, tech: it's $10k for 28 cores... these 56-core chips are $20-40k each, and you have to have two of them soldered down on an Intel board...
Intel is going to have to offer 80%+ discounts to sell chips. (Rough per-core math below.)
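For context, a quick per-core cost sketch using the rough prices thrown around in this thread (illustrative ballpark only, not official list prices):

```python
# Price-per-core using the approximate figures cited above.
def price_per_core(price_usd: float, cores: int) -> float:
    return price_usd / cores

print(round(price_per_core(10_000, 28)))  # ~$357/core (28c Xeon 8280 at ~$10k)
print(round(price_per_core(20_000, 56)))  # ~$357/core (56c 9200 at the low-end ~$20k estimate)
print(round(price_per_core(7_000, 64)))   # ~$109/core (64c Rome at ~$7k)
```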