
NVIDIA Details "Pascal" Some More at GTC Japan

Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
While I appreciate the fact check, disabling PCI-e wasn't what I was trying to say. What I meant was developing a wholly new interface and offering only a handful of PCI-e connections.
Well, Intel could theoretically turn their back on a specification they basically pushed for, but how does that not affect every vendor not just committed to PCI-E (since 4.0, like previous versions, is backwards compatible), but every vendor already preparing PCI-E 4.0 logic? (Seems kind of crappy to have vendors showing PCI-E 4.0 logic at an Intel Developer Forum if they planned on shafting them.)
If they can demonstrate the ability to connect any card to their system via the PCI-e bus, it effectively means they're following the FTC's requirements to the letter of the law (if not the spirit).
The FTC's current mandate does not preclude further action (nor that of the EU or DoJ for that matter), as evidenced by the Consent Decree the FTC slapped on it last year.
Intel decides to go whole hog with PCE, and cut Nvidia out of the HPC market.
Really? I'm not sure how landing a share of a $325 million contract and an ongoing partnership with IBM fits into that. ARM servers/HPC also use PCI-E, and are also specced for Nvidia GPGPU deployment.
They allow AMD to cross-license the interconnect (under their sharing agreement for x86),
Well, that's not going to happen unless AMD bring some IP of similar value to the table. Why would Intel give away IP to a competitor (and I'm talking about HSA here), and why would AMD opt for licensing Intel IP when PCI-E is not only free, it is also used by all the other HSA Foundation founder members - Samsung, ARM, MediaTek, Texas Instruments, and of course Qualcomm, whose new server chip business supports PCI-E... and that's without AMD alienating its own installed discrete graphics user base.

If you don't mind me saying so, that sounds like a completely convoluted and fucked up way to screw over a small IHV. If Intel were that completely mental about putting Nvidia out of business, wouldn't they just buy it?
 
Last edited:

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,993 (2.35/day)
Location
Louisiana
Processor Core i9-9900k
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax ETS-T50 Black CPU cooler
Memory 32GB (2x16) Mushkin Redline DDR-4 3200
Video Card(s) ASUS RTX 4070 Ti Super OC 16GB
Storage 1x 1TB MX500 (OS); 2x 6TB WD Black; 1x 2TB MX500; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic Focus GX-1000 Gold
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
LET THE MILLENNIALS AND AMD FB RAGE BEGIN!

Pascal will smoke everything out there

I'm not sure I would bet on that yet. Arctic Islands has just as much potential to smoke Pascal at this point.

In reality, I feel we will have rough parity in performance, which will be a plus for the beleaguered AMD.
 
Joined
Nov 2, 2013
Messages
466 (0.12/day)
System Name Auriga
Processor Ryzen 7950X3D w/ aquacomputer cuplex kryos NEXT with VISION - acrylic/nickel
Motherboard Asus ProArt X670E-CREATOR WIFI + Intel X520-DA2 NIC
Cooling Alphacool Res/D5 Combo •• Corsair XR7 480mm + Black Ice Nemesis 360GTS radiators •• 7xNF-A12 chromax
Memory 96GB G.Skill Trident Z5 RGB (F5-6400J3239F48GX2-TZ5RK)
Video Card(s) MSI RTX 4090 Suprim Liquid X w/ Bykski waterblock
Storage 2TB inland TD510 Gen5 ••• 4TB WD Black SN850X ••• 40TB UNRAID NAS
Display(s) Alienware AW3423DWF (3440x1440, 10-bit @ 145Hz)
Case Thermaltake Core P8
Power Supply Corsair AX1600i
Mouse Razer Viper V2 Pro (FPS games) + Logitech MX Master 2S (everything else)
Keyboard Keycult No2 rev 1 w/Amber Alps and TX stabilizers on a steel plate. DCS 9009 WYSE keycaps
Software W11 X64 Pro
Benchmark Scores https://valid.x86.fr/c3rxw7
It's also becoming clear that NVIDIA will build its "Pascal" chips on the 16 nanometer FinFET process (AMD will build its next-gen chips on the more advanced 14 nm process).

oh look, more baseless BS. who says that 14nm is more advanced? only intel and samjunk are making 14nm chips at this point, so i'm assuming you are referring to the latter. you must not know about the iPhone SoC disaster... Apple sourced processors from both TSMC and Samsung for the iPhone 6s. TSMC used 16nm FF and samsung used their shiny new 14nm process. the samsung-built SoCs burn more power and run hotter even though they are basically the same chip.

if AMD is having samjunk build their chips then I'm DEFINITELY going with nvidia again.
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
Well, Intel could theoretically turn their back on a specification they basically pushed for, but how does that not affect every vendor not just committed to PCI-E (since 4.0, like previous versions, is backwards compatible), but every vendor already preparing PCI-E 4.0 logic? (Seems kind of crappy to have vendors showing PCI-E 4.0 logic at an Intel Developer Forum if they planned on shafting them.)

The FTC's current mandate does not preclude further action (nor that of the EU or DoJ for that matter), as evidenced by the Consent Decree the FTC slapped on it last year.

Really? I'm not sure how landing a share of a $325 million contract and an ongoing partnership with IBM fits into that. ARM servers/HPC also use PCI-E, and are also specced for Nvidia GPGPU deployment.

Well, that's not going to happen unless AMD bring some IP of similar value to the table. Why would Intel give away IP to a competitor (and I'm talking about HSA here), and why would AMD opt for licensing Intel IP when PCI-E is not only free, it is also used by all the other HSA Foundation founder members - Samsung, ARM, MediaTek, Texas Instruments, and of course Qualcomm, whose new server chip business supports PCI-E... and that's without AMD alienating its own installed discrete graphics user base.

If you don't mind me saying so, that sounds like a completely convoluted and fucked up way to screw over a small IHV. If Intel were that completely mental about putting Nvidia out of business, wouldn't they just buy it?

I think there's a disconnect here.

I'm looking at a total of three markets here. There's an emerging market, a market that Intel has a death grip on, and a market where there's some competition. Intel isn't stupid, so they'll focus development on the emerging market, and the technology built there will filter down into other markets. As the emerging market is HPC, that technology will be driving the bus over the next few years. As adoption costs money, we'll see the interconnect go from HPC to servers to consumer goods incrementally.


As such, let's figure this out. PCI-e 4.0 may well be featured heavily in both the consumer (Intel has mild competition) and server (Intel has a death grip) markets. These particular products are continually improved, but they're iterative improvements. It isn't a stretch to think that they'll have PCI-e 4.0 in the next generation, given that it's a minor improvement. While the consumer and server markets continue to improve, the vast majority of research and development is done for the HPC market - a market where money is less of an object, and where a unique new connection type isn't a liability if better connection speeds can be delivered.

Intel develops a new interconnect for the HPC crowd that offers substantially improved transfer rates. They allow AMD to license the interconnect so that they can demonstrate that the standard isn't anti-competitive. AMD has the standard, but they don't have the resources to compete in the HPC world. They're stuck just trying to right the ship with consumer hardware and server chips (Zen being their first chance next year). Intel has effectively produced a new interconnect standard in the market where their dominance is most challenged, demonstrated that they aren't utilizing anti-competitive practices, but have never actually opened themselves up to competition. AMD is currently a lame duck because the HPC market is just out of its reach.

By the time the new technologies filter down to consumer and server level hardware, PCI-e 4.0 will have been around for a couple of years. Intel will have already utilized PCI-e as they pushed for, while being out from under the FTC's restrictions on including PCI-e. They'll be able to offer token PCI-e support and actually focus on their own interconnect. It'll have taken at least a few years to filter to consumers, but the money Intel invested into research isn't going to be forgotten.


You seem to be looking at the next two years. I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0. What I'm worried about is 4-6 years down the line, once Intel has become heavily invested in the HPC market and needs to compete with Nvidia to capture more of it. They aren't stupid, so they'll do whatever it takes to destroy the competition, especially when they're the only game in town capable of offering a decent CPU. Once they've got a death grip on that market, the technology will just flow downhill from there. This isn't the paranoid delusion that this little development will change everything tomorrow, but that it is setting the ship upon a course that will hit a rock in the next few years. It is screwed up to say this will influence things soon, but it isn't unreasonable to say that Intel has a history of doing whatever it takes to secure market dominance. FTC and fair trade practices be damned.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
You seem to be looking at the next two years. I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0. What I'm worried about is 4-6 years down the line, once Intel has become heavily invested in the HPC market and needs to compete with Nvidia to capture more of it.
Nvidia survives (and thrives) in the HPC/Server/WS market because of its pervasive software environment, not its hardware - it is also not a major player WRT revenue. Intel's biggest concern is that its largest competitors are voluntarily moving in their own direction. The HPC roadmap is already mapped out to 2020 (and a little beyond), as is fairly well known. Xeon will pair with FPGAs (hence the Altera acquisition) and Xeon Phi. IBM has also roadmapped FPGA and GPU (Tesla) pairings with POWER9 and POWER10. To those two you can add hyperscale server/HPC clusters (Applied Micro X-Gene, Broadcom Vulcan, Cavium ThunderX, Qualcomm etc.), which Intel has targeted with Xeon-D.
Intel could turn the whole system into a SoC or MCM (processor + graphics/co-processor + shared eDRAM + interconnect) and probably will, because sure as hell IBM/Mellanox/Nvidia will be looking at the same scenario. If you're talking about PCI-E being removed from consumer motherboards, then yes, eventually that will be the case. Whether Nvidia (or any other add-in card vendor) survives will rely on strength of product. Most chip makers are moving towards embedded solutions - and Nvidia, for its part, also has a mezzanine module solution with Pascal, so that evolution is already in progress.
They aren't stupid, so they'll do whatever it takes to destroy the competition,
All I can say is good luck with that. ARM has an inherent advantage that Intel cannot match so far. x86 simply does not scale down far enough to match ARM in high volume consumer electronics, and Intel is too monolithic a company to react and counter an agile and pervasive licensed ecosystem. They are in exactly the same position IBM was in when licensing meant x86 became competitive enough to undermine their domination of the nascent PC market. Talking of IBM and your "4-6 years down the line", POWER9 (2017) won't even begin deployment until 3 years hence, with POWER10 slated for 2020-21 entry. Given that in non-GPU accelerated systems Intel's Xeon still lags behind IBM's BlueGene/Q and Fujitsu's SPARC64 in computational effectiveness, Intel has some major competition.
On a purely co-processor point, Tesla continues to be easier to deploy and has greater performance than Xeon Phi, which Intel counters by basically giving away Xeon Phi to capture market share (Intel apparently gifted Xeon Phis for China's Tianhe-2) - although its performance per watt and workload challenges mean that vendors still look to Tesla (as the latest Green500 list attests). Note that the top system is using a PEZY-SC GPGPU that does not contain a graphics pipeline (which is how I suspect future Teslas will evolve).
Your argument revolves around Intel being able to change the environment by force of will. That will not happen unless Intel choose to walk a path separate from their competitors and ignore the requirements of vendors. Intel do not sell HPC systems. Intel provide hardware in the form of interconnects and form-factored components. A vendor that actually constructs, deploys, and maintains the system - such as Bull (Atos), for the sake of an example - still has to sell the right product for the job, which is why they sell Xeon-powered S6000s to some customers, and IBM-powered Escalas to others. How does Intel force both vendors and customers to turn away from competitors of equal (or, in some cases, far greater) financial muscle when their products are demonstrably inferior for certain workloads?
It is screwed up to say this will influence things soon, but it isn't unreasonable to say that Intel has a history of doing whatever it takes to secure market dominance. FTC and fair trade practices be damned.
Short memory. Remember the last time Intel tried to bend the industry to its will? How did Itanium work out?
Intel's dominance has been achieved through three avenues.
1. Forge a standard and allow that standard to become open (SSE, AVX, PCI, PCI-E etc) but ensure that their products are first to utilize the feature and become synonymous with its usage.
2. Use their base of IP and litigation to wage economic war on their competitors.
3. Limit competitors' market opportunities by outspending them (rebates, bribery).

None of those three apply to their competitors in enterprise computing.
1. You're talking about a proprietary standard (unless Intel hand it over to a special interest group). Intel's record is spotty to say the least. How many proprietary standards have forced the hand of an entire industry? Is Thunderbolt a roaring success?
2. Too many alliances, too many big fish. Qualcomm isn't Cyrix, ARM isn't Seeq, IBM isn't AMD or Chips & Technologies. Intel's record of trying to enforce its will against large competitors? You remember Intel's complete back-down to Microsoft over incorporating NSP in its processors? Intel's record against industry heavyweights isn't the one that pervades the small pond of "x86 makers who aren't Intel".
3. Intel's $4.2 billion in losses in 2014 (add to that the forecast of $3.4 billion in losses this year) through literally trying to buy x86 mobile market share indicates that their effectiveness outside of the core businesses founded 40 years ago isn't that stellar. Like any business faced with overwhelming competition willing to cut profit to the bone (or even sustain losses for the sake of revenue), they bend to the greater force. Intel are just hoping that they are better equipped than the last time they got swamped (Japanese DRAM manufacturing forcing Intel from the market).

You talk as if Intel is some all-consuming juggernaut. The reality is that Intel's position isn't as rock solid as you may think. It does rule the x86 market, but their slice of the consumer and enterprise revenue pie is far from assured. Intel can swagger all it likes in the PC market, but their acquisition of Altera and pursuit of Cray's interconnect business are indicators that they know they have a fight on their hands. I'm not prone to voicing absolutes unless they are already proven, but I would be near certain that Intel would not introduce a proprietary standard - licensed or not, if it decreased marketing opportunity - and Intel's co-processor market doesn't even begin to offset the marketing advantages of the third-party add-in board market.
***********************************************************************************************
You also might want to see the AMD license theory from a different perspective:

Say Intel develop a proprietary non-PCI-E standard and decide to license it to AMD to legitimize it as a default standard. What incentive is there for AMD to use it? Intel use the proprietary standard and cut out the entire add-in board market (including AMD's own graphics). If AMD have a credible x86 platform, why wouldn't they retain PCI-E and have the entire add-in board market to themselves (including both major players in graphics and their HSA partners' products), rather than fight Intel head-to-head in the marketplace with a new interface?

Which option do you think would benefit AMD more? Which option would boost AMD's market share to the greater degree?
 
Last edited:
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
Nvidia survives (and thrives) in the HPC/Server/WS market because of its pervasive software environment, not its hardware - it is also not a major player WRT revenue. Intel's biggest concern is that its largest competitors are voluntarily moving in their own direction. The HPC roadmap is already mapped out to 2020 (and a little beyond), as is fairly well known. Xeon will pair with FPGAs (hence the Altera acquisition) and Xeon Phi. IBM has also roadmapped FPGA and GPU (Tesla) pairings with POWER9 and POWER10. To those two you can add hyperscale server/HPC clusters (Applied Micro X-Gene, Broadcom Vulcan, Cavium ThunderX, Qualcomm etc.), which Intel has targeted with Xeon-D.
Intel could turn the whole system into a SoC or MCM (processor + graphics/co-processor + shared eDRAM + interconnect) and probably will, because sure as hell IBM/Mellanox/Nvidia will be looking at the same scenario. If you're talking about PCI-E being removed from consumer motherboards, then yes, eventually that will be the case. Whether Nvidia (or any other add-in card vendor) survives will rely on strength of product. Most chip makers are moving towards embedded solutions - and Nvidia, for its part, also has a mezzanine module solution with Pascal, so that evolution is already in progress.

All I can say is good luck with that. ARM has an inherent advantage that Intel cannot match so far. x86 simply does not scale down far enough to match ARM in high volume consumer electronics, and Intel is too monolithic a company to react and counter an agile and pervasive licensed ecosystem. They are in exactly the same position IBM was in when licensing meant x86 became competitive enough to undermine their domination of the nascent PC market. Talking of IBM and your "4-6 years down the line", POWER9 (2017) won't even begin deployment until 3 years hence, with POWER10 slated for 2020-21 entry. Given that in non-GPU accelerated systems Intel's Xeon still lags behind IBM's BlueGene/Q and Fujitsu's SPARC64 in computational effectiveness, Intel has some major competition.
On a purely co-processor point, Tesla continues to be easier to deploy and has greater performance than Xeon Phi, which Intel counters by basically giving away Xeon Phi to capture market share (Intel apparently gifted Xeon Phis for China's Tianhe-2) - although its performance per watt and workload challenges mean that vendors still look to Tesla (as the latest Green500 list attests). Note that the top system is using a PEZY-SC GPGPU that does not contain a graphics pipeline (which is how I suspect future Teslas will evolve).
Your argument revolves around Intel being able to change the environment by force of will. That will not happen unless Intel choose to walk a path separate from their competitors and ignore the requirements of vendors. Intel do not sell HPC systems. Intel provide hardware in the form of interconnects and form-factored components. A vendor that actually constructs, deploys, and maintains the system - such as Bull (Atos), for the sake of an example - still has to sell the right product for the job, which is why they sell Xeon-powered S6000s to some customers, and IBM-powered Escalas to others. How does Intel force both vendors and customers to turn away from competitors of equal (or, in some cases, far greater) financial muscle when their products are demonstrably inferior for certain workloads?

Short memory. Remember the last time Intel tried to bend the industry to its will? How did Itanium work out?
Intel's dominance has been achieved through three avenues.
1. Forge a standard and allow that standard to become open (SSE, AVX, PCI, PCI-E etc) but ensure that their products are first to utilize the feature and become synonymous with its usage.
2. Use their base of IP and litigation to wage economic war on their competitors.
3. Limit competitors' market opportunities by outspending them (rebates, bribery).

None of those three apply to their competitors in enterprise computing.
1. You're talking about a proprietary standard (unless Intel hand it over to a special interest group). Intel's record is spotty to say the least. How many proprietary standards have forced the hand of an entire industry? Is Thunderbolt a roaring success?
2. Too many alliances, too many big fish. Qualcomm isn't Cyrix, ARM isn't Seeq, IBM isn't AMD or Chips & Technologies. Intel's record of trying to enforce its will against large competitors? You remember Intel's complete back-down to Microsoft over incorporating NSP in its processors? Intel's record against industry heavyweights isn't the one that pervades the small pond of "x86 makers who aren't Intel".
3. Intel's $4.2 billion in losses in 2014 (add to that the forecast of $3.4 billion in losses this year) through literally trying to buy x86 mobile market share indicates that their effectiveness outside of the core businesses founded 40 years ago isn't that stellar. Like any business faced with overwhelming competition willing to cut profit to the bone (or even sustain losses for the sake of revenue), they bend to the greater force. Intel are just hoping that they are better equipped than the last time they got swamped (Japanese DRAM manufacturing forcing Intel from the market).

You talk as if Intel is some all-consuming juggernaut. The reality is that Intel's position isn't as rock solid as you may think. It does rule the x86 market, but their slice of the consumer and enterprise revenue pie is far from assured. Intel can swagger all it likes in the PC market, but their acquisition of Altera and pursuit of Cray's interconnect business are indicators that they know they have a fight on their hands. I'm not prone to voicing absolutes unless they are already proven, but I would be near certain that Intel would not introduce a proprietary standard - licensed or not, if it decreased marketing opportunity - and Intel's co-processor market doesn't even begin to offset the marketing advantages of the third-party add-in board market.
***********************************************************************************************
You also might want to see the AMD license theory from a different perspective:

Say Intel develop a proprietary non-PCI-E standard and decide to license it to AMD to legitimize it as a default standard. What incentive is there for AMD to use it? Intel use the proprietary standard and cut out the entire add-in board market (including AMD's own graphics). If AMD have a credible x86 platform, why wouldn't they retain PCI-E and have the entire add-in board market to themselves (including both major players in graphics and their HSA partners' products), rather than fight Intel head-to-head in the marketplace with a new interface?

Which option do you think would benefit AMD more? Which option would boost AMD's market share to the greater degree?

I can see your point, but there is some inconsistency.

First off, my memory is long enough. Their very first standard in modern computing was the x86 architecture - the foundation upon which their entire business is built today, correct? Yes, AMD pioneered x86-64, but Intel has been riding against RISC and its ilk for how many decades? Itanium, RDRAM, and their failure in the business field are functionally footnotes in a much larger campaign. They've managed to functionally annihilate AMD, despite AMD having had market dominance for a period of time. They've weathered several fiascos (Itanium, NetBurst, the FTC, etc.), yet came away less crippled than Microsoft. I view them as very good at what they do, hulking to the point where any competition is unacceptable, and capable of undoing any errors by throwing enough cash and resources at them to completely remove their issues.

To your final proposition, please reread my original statement. I propose that developing a proprietary standard allows a lame duck competitor a leg up, prevents competition in an emerging market, and still meets FTC requirements. AMD benefits from making cards to the new interconnect standard because they can suddenly offer their products to an entirely new market. Intel isn't helping their CPU business here; they're allowing AMD an avenue by which to make their HPC-capable GPUs immediately compatible with Intel's offerings. Intel effectively has AMD battle Nvidia for the HPC market, and while those two grind each other down, they are able to mature their FPGA projects to the point where HPC can be done on SoC options. They have their own interconnect, a company that's willing to fight their battle for them, and time. AMD is willing to get in on the fight because it's money. Simply redesigning the interconnect will allow them to reach a new market, bolstered by Intel's tacit support.

What happens once all of this leaves the HPC market and filters down to consumer hardware is what I'm less than happy about. ARM isn't a factor in the consumer space. It isn't a factor because none of our software is designed to take advantage of a monstrous number of cores. I don't see that changing in the next decade, because it would effectively mean billions, if not trillions, in money spent to completely rewrite code. As such, consumers will get some of the technologies of HPC, but only those which can be translated to x86-64. NVLink and the like won't translate anywhere except GPUs. A new interconnect, on the other hand, would translate fine. If Intel developed it in parallel to PCI-e 4.0 they would have a practical parachute, should they run into issues. Can you not see how this both embraces PCI-e, while preparing to eject it once their alternatives come to fruition?



After saying all this, I can assume part of your response. The HPC market is emerging, and Intel's architecture is holding it back. I get it: HPC is a bottomless pit for money where insane investments pay off. My problem is that Nvidia doesn't have the money Intel does. Intel is a lumbering giant that never merely competes in a market; they seek dominance and control. I don't understand what makes the HPC market any different. This is why I think they'll pull something insane, and try to edge Nvidia out of the market. They've got a track record of developing new standards and throwing money at something until it works. A new interconnect standard fits that bill exactly. While I see why a human would have misgivings about going down the same path, Intel isn't human.

If you'd like a more recent history lesson on Intel introducing their own standards, let's review QPI. If that doesn't float your boat, Intel is a part of the OIC which is standardizing interconnection for IoT devices. I'd also like to point out that Light Peak became Thunderbolt, and they moved Light Peak to MXC (which to my knowledge is in use for high cost systems: http://www.rosenberger.com/mxc/). Yes, Thunderbolt and Itanium were failures, but I'll only admit error if you can show me a company that's existed as long as Intel, yet never had a project failure.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Their very first standard in modern computing was the x86 architecture - the foundation upon which their entire business is built today, correct?
Nope. Intel was built on memory. Their EPROM and DRAM business allowed them to prosper. The 1103 chip built Intel thanks to how cheap it was in comparison to magnetic-core memory. Microprocessors had low priority, especially for Gordon Moore and Robert Noyce (rather than recapitulate their growth, I would point you towards an article I wrote some time ago, and the reading list I supplied in the first post under the article - particularly the Tim Jackson and Bo Lojek books).
Yes, AMD pioneered x86-64, but Intel has been riding against RISC and its ilk for how many decades?
ARM wasn't the force it is now, and up until comparatively recently IBM and Intel had little overlap. The only other architecture of note came from DEC, who mismanaged themselves out of existence. If DEC had accepted Apple's offer to supply the latter with processors the landscape would very likely look a whole lot different - for one, AMD wouldn't have had the IP for K7 and K8 or HyperTransport.
None of what was exists now.
Itanium, RDRAM, and their failure in the business field are functionally footnotes in a much larger campaign.
Makes little difference. Intel (like most tech companies that become established) innovated in their early stages (even if their IP largely accrued from Fairchild Semi and cross-licenses with Texas Instruments, National Semi, and IBM). Mature companies rely more on purchasing IP, which is exactly the model Intel have followed.
They've managed to functionally annihilate AMD, despite AMD having had market dominance for a period of time.
AMD are, and always have been, a bit-part player. They literally owe their existence to Intel (if it were not for Robert Noyce investing in AMD in 1969, they wouldn't have got anywhere close to their $1.55m incorporation target), and have been under Intel's boot since they signed their first contract to license Intel's 1702A EPROM in 1970. AMD have been indebted to Intel's IP for their entire existence, excluding their first few months, when they manufactured licensed copies of Fairchild's TTL chips.
I view them as very good at what they do, hulking to the point where any competition is unacceptable, and capable of undoing any errors by throwing enough cash and resources at them to completely remove their issues.
Except that Itanium was never accepted by anyone except HP, who were bound by contract to accept it.
Except StrongARM and XScale (ARMv4/v5) never became any sort of success
Except Intel's microcontrollers have been consistently dominated by Motorola and ARM

Basically Intel has been fine so long as it stayed with x86. Deviation from the core product has met with failure. The fact that Intel is precisely nowhere in the mobile market should be a strong indicator that that trend is continuing. Intel will continue to purchase IP to gain relevancy and will in all probability continue to lose money outside of its core businesses.
AMD benefits from making cards to the new interconnect standard because they can suddenly offer their products to an entirely new market.
...a market where Intel would still be the dominant player, holding 98.3-98.5% market share... and unless AMD plans on isolating itself from its HSA partners, licenses would also need to be granted to them.
Intel isn't helping their CPU business here; they're allowing AMD an avenue by which to make their HPC-capable GPUs immediately compatible with Intel's offerings.
And why the hell would they do that? How can AMD compete with Intel giving away Xeon Phi co-processors? Nvidia survives because of the CUDA ecosystem. With AMD offering CUDA-porting tools and FirePro offering superior FP64 to Tesla (with both being easier to code for and offering better performance per watt than Phi), all Intel would be doing is substituting one competitor for another - with the first competitor remaining viable anyway thanks to IBM and ARM.
Intel effectively has AMD battle Nvidia for the HPC market, and while those two grind each other down, they are able to mature their FPGA projects to the point where HPC can be done on SoC options.
That's a gross oversimplification. Intel and Nvidia compete in ONE aspect of HPC - GPU accelerated clusters. Nvidia has no competition from Intel in some other areas (notably cloud services, where Nvidia have locked up both Microsoft and Amazon - the latter already having its 3rd generation Maxwell cards installed), while Intel's main revenue earner, data centers, don't use GPUs, and 396 of the top 500 supers don't use GPUs either.
They have their own interconnect, a company that's willing to fight their battle for them, and time. AMD is willing to get in on the fight because it's money.
Not really. It's more R&D and more time and effort spent qualifying hardware for a market that Intel will dominate from well before any contract is signed. Tell me this: when has Intel EVER allowed licensed use of their IP before Intel itself had assumed a dominant position with the same IP? (The answer is never.) Your scenario postulates that AMD will move from one architecture where they are dominated by Intel to another architecture where they are dominated by Intel AND have to factor in Intel owning the specification. You have just made an argument that Intel will do anything to win, and yet you expect AMD to bite on Intel-owned IP where revisions to the specification could be made unilaterally by Intel. You do remember that AMD signed a long-term deal for Intel processors with the 8085, and Intel promptly stiffed AMD on the 8086, forcing AMD to sign up with Zilog? Or Intel granting AMD an x86 license, then stiffing them on the 486?
I don't understand what makes the HPC market any different.
Intel owns the PC market. It doesn't own the enterprise sector.
In the PC space, Nvidia is dependent upon Wintel. In the enterprise sector it can pick and choose an architecture. Nvidia hardware sits equally well with IBM, ARM, or x86, and unlike in consumer computing, IBM particularly is a strong competitor and owns a solid market share. You don't understand what makes the HPC market any different? Well, the short answer is that IBM isn't AMD, and enterprise customers are somewhat more discerning in their purchases than the average consumer... and as you've already pointed out, ARM isn't an issue for Intel (or RISC in general, for that matter) in PCs. The same obviously isn't true in enterprise.
If you'd like a more recent history lesson on Intel introducing their own standards, let's review QPI.
Proprietary tech. Used by Intel (basically a copy of DEC's EV6 and later AMD's HyperTransport). Not used by AMD. Its introduction led to Intel giving Nvidia $1.5 billion. It affected nothing other than removing Nvidia MCP chipsets thanks to the FSB stipulation in licensing - of little actual consequence, since Nvidia had been devoting fewer resources to them after the 680i. That about covers it, I think.

You seem to be forecasting a doomsday scenario that is 1. probably a decade away, and 2. being prepared for now.
By the time PCI-E phases out, the industry will have moved on to embedded solutions and it will be a moot point.
Anyhow, I think I'm done here. My background is in big iron (my first job after leaving school was coding for Honeywell and Burroughs mainframes), and I keep current even though I left the industry back in '92 (excepting the occasional article writing), so I'm reasonably confident in my view. I guess we'll find out in due course how wrong, or right, we were.
 
Last edited:
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
You seem to be forecasting a doomsday scenario that is 1. probably a decade away, and 2. being prepared for now.
By the time PCI-E phases out, the industry will have moved on to embedded solutions and it will be a moot point.
Anyhow, I think I'm done here. My background is in big iron (my first job after leaving school was coding for Honeywell and Burroughs mainframes), and I keep current even though I left the industry back in '92 (excepting the occasional article writing), so I'm reasonably confident in my view. I guess we'll find out in due course how wrong, or right, we were.

I was reasonably certain that I made that clear here:

....
As such, let's figure this out. PCI-e 4.0 may well be featured heavily in both the consumer (Intel has mild competition) and server (Intel has a death grip) markets. These particular products are continually improved, but they're iterative improvements. It isn't a stretch to think that they'll have PCI-e 4.0 in the next generation, given that it's a minor improvement. While the consumer and server markets continue to improve, the vast majority of research and development is done for the HPC market - a market where money is less of an object, and where a unique new connection type isn't a liability if better connection speeds can be delivered.
....
By the time the new technologies filter down to consumer and server level hardware, PCI-e 4.0 will have been around for a couple of years. Intel will have already utilized PCI-e as they pushed for, while being out from under the FTC's restrictions on including PCI-e. They'll be able to offer token PCI-e support and actually focus on their own interconnect. It'll have taken at least a few years to filter to consumers, but the money Intel invested into research isn't going to be forgotten.
...

If that wasn't clear, yes, I was referring to the future. Not next year, not two years from now - more like 5-7 years at the earliest, and more likely 10+ years away.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
If that wasn't clear, yes, I was referring to the future. Not next year, not two years from now - more like 5-7 years at the earliest, and more likely 10+ years away.
A decade was always my stance (since Intel has made no secret of KNL+FPGA). The lack of clarity, I think, originates in how the discussion points you've made have gone from no PCI-E 4.0 (which is slated for introduction within 2 years, and the point I originally found contentious)...
For example, instead of introducing PCI-e 4.0, introduce PCE (Platform Connect Experimental)....
...to 4-6 years (a 2-4 year PCI-E 4.0 platform cycle is way too short for HPC)
I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0. What I'm worried about is 4-6 years down the line

Ten years is my own estimate, so I have no issue with it. Architecture, logic integration, product cadence, and process node should all come together around that timeframe to allow most systems to be MCM in nature (assuming the cadence for III-V semicon fabbing for 5nm/3nm remains on track), if not SoC.
To be honest, I saw the theory of Intel tossing PCI-E 4.0 as somewhat inflammatory, alarmist, and so remote a possibility as to be impossible, based on what needs to happen for that scenario to play out.
 
Last edited:
Joined
Jun 13, 2012
Messages
1,388 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Fury is roughly on par with Maxwell on power efficiency.
Interesting - who will have the better process, GloFo 14nm or TSMC 16nm?
Samsung's 14nm was rumored to suck.
Um, I don't think that is true. Think about how it's compared: the 980 Ti is a 250W card and the Fury is 275W. If you took the draw of GDDR5 out of the numbers and used what HBM draws, I would bet that "on par" would show a bit more of a gap. Now, I don't know the exact power draw of GDDR5 or HBM, but it would likely widen that gap, since it's claimed HBM needs less power.
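To make that arithmetic concrete, here's a minimal sketch of the comparison being proposed. The memory wattages are placeholder estimates (labeled as such), not measured figures - more grounded numbers are worked out further down the thread:

```python
# Sketch: strip an estimated memory-subsystem draw out of each
# card's board power and compare GPU-only consumption.
# The memory wattages are placeholder assumptions, not measurements.

FURY_BOARD_W = 275       # Fury board power, as quoted above
GTX_980TI_BOARD_W = 250  # 980 Ti board power, as quoted above

HBM_EST_W = 15           # placeholder estimate for HBM draw
GDDR5_EST_W = 30         # placeholder estimate for GDDR5 draw

fury_gpu_only = FURY_BOARD_W - HBM_EST_W         # 260 W
gtx_gpu_only = GTX_980TI_BOARD_W - GDDR5_EST_W   # 220 W

print(f"Board-power gap: {FURY_BOARD_W - GTX_980TI_BOARD_W} W")  # 25 W
print(f"GPU-only gap:    {fury_gpu_only - gtx_gpu_only} W")      # 40 W
# If HBM really draws less than GDDR5, excluding memory widens the
# gap - which is the point being made about "on par" efficiency.
```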
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
straight from AMD themselves. In short: proprietary use of the protocol.
"DisplayPort Adaptive-Sync is an ingredient DisplayPort feature that enables real-time adjustment of monitor refresh rates."

That technology can be harnessed for anyone's hardware as long as they write the software/drivers for that hardware. That is NOT "proprietary use of the protocol"; you're simply required to use AMD's FreeSync software/drivers with their hardware. I see no problem.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Um, I don't think that is true. Think about how it's compared: the 980 Ti is a 250W card and the Fury is 275W. If you took the draw of GDDR5 out of the numbers and used what HBM draws, I would bet that "on par" would show a bit more of a gap. Now, I don't know the exact power draw of GDDR5 or HBM, but it would likely widen that gap, since it's claimed HBM needs less power.
Fury has that power-consuming water pipe thingy (on the other hand, being water cooled also reduces temps and hence power consumption, so one could argue that's even... but then you have the air-cooled Nano).

Heck, leaving Fury alone, even Tonga isn't far off:



And the 380X is significantly faster than the 960.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Fury has that power consuming water pipe thingy
The Fury is air cooled. If you are referring to the Fury X and its AIO water cooling, the pump for such kits uses less than 5W in most cases. Asetek's is specced for 3.1W.
If you are comparing performance per watt, I'd suggest viewing this and then factoring in the wattage draw difference in the power consumption charts. I don't think there are any published figures for HBM vs GDDR5 IMC power draw, but AMD published some broad numbers just for the memory chips.


AMD's broad figures work out to roughly 35 GB/s per watt for HBM and 10.66 GB/s per watt for GDDR5, so:
The Fury X has 512GB/s of bandwidth, so 512 / 35 = 14.6W.
The GTX 980 Ti has 336GB/s of bandwidth, so 336 / 10.66 = 31.5W (16.9W more, which should be subtracted from the 980 Ti's power consumption for a direct comparison).

I'd note that the 16.9W difference is probably closer to 30-40W overall once you include the differences in memory controller power requirements and real-world usage, but accurate figures seem to be hard to come by. Hope this adds some clarification for you.
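If you want to reproduce that arithmetic, here's a minimal sketch using AMD's published bandwidth-per-watt figures (memory chips only; the IMC is excluded, as noted above):

```python
# Estimate memory-chip power from bandwidth using AMD's broad
# efficiency figures: ~10.66 GB/s per watt for GDDR5 and
# ~35 GB/s per watt for HBM. IMC power is deliberately excluded.

def memory_power_w(bandwidth_gb_s: float, gb_s_per_watt: float) -> float:
    """Approximate memory-chip power draw in watts."""
    return bandwidth_gb_s / gb_s_per_watt

fury_x_hbm = memory_power_w(512, 35.0)        # ~14.6 W
gtx_980ti_gddr5 = memory_power_w(336, 10.66)  # ~31.5 W

print(f"Fury X HBM:   {fury_x_hbm:.1f} W")
print(f"980 Ti GDDR5: {gtx_980ti_gddr5:.1f} W")
print(f"Delta:        {gtx_980ti_gddr5 - fury_x_hbm:.1f} W")  # ~16.9 W
```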
 

nem

Joined
Oct 22, 2013
Messages
165 (0.04/day)
Location
Cyberdyne CPU Sky Net
still waiting for hbm.. :p
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
What's your point? Posting something aside from a slide would be helpful, since you are quoting my post. The slide does nothing to contradict what I just posted: you can clearly see from it that HBM uses just under half the power of GDDR5 (and I'm talking about both the IC and the overall system), which is something I've already worked out for you in my previous post. The only difference is the IMC power requirement - something I noted would be a factor but had no hard figures for. The ballpark figures are well known (I've used them on occasion myself), but since AMD isn't particularly forthcoming with their GDDR5 IMC power usage, it is flawed to factor in Nvidia's numbers given their more aggressive memory frequency profiles.
Idling Fury X consumes 15W more than idling Fury Nano.
Why should that surprise anyone? Nano chips are binned for lower overall voltage, and the GPU idle voltage for the Nano is lower than for the Fury X (0.9V vs 0.968V). W1zz also noted a 9W discrepancy at the time, so you aren't exactly breaking new ground.
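As a rough illustration of why that voltage binning matters: dynamic CMOS power scales approximately with the square of voltage at a given clock. This is a textbook approximation applied to the idle voltages above, not AMD's actual power model, and it ignores leakage and the AIO pump:

```python
# Rough illustration: dynamic CMOS power scales ~V^2 at a fixed
# clock (P ~ C * V^2 * f). Textbook approximation only - leakage
# and the Fury X's AIO pump draw are ignored.

FURY_X_IDLE_V = 0.968  # GPU idle voltage, per the post above
NANO_IDLE_V = 0.900

ratio = (FURY_X_IDLE_V / NANO_IDLE_V) ** 2
print(f"Fury X vs Nano dynamic power at idle: ~{(ratio - 1) * 100:.0f}% higher")
# ~16% higher from voltage alone; add the pump (~3 W, per the
# earlier post) and an idle-power gap is unsurprising.
```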
 
Last edited:
Joined
Sep 22, 2012
Messages
1,010 (0.23/day)
Location
Belgrade, Serbia
System Name Intel® X99 Wellsburg
Processor Intel® Core™ i7-5820K - 4.5GHz
Motherboard ASUS Rampage V E10 (1801)
Cooling EK RGB Monoblock + EK XRES D5 Revo Glass PWM
Memory CMD16GX4M4A2666C15
Video Card(s) ASUS GTX1080Ti Poseidon
Storage Samsung 970 EVO PLUS 1TB /850 EVO 1TB / WD Black 2TB
Display(s) Samsung P2450H
Case Lian Li PC-O11 WXC
Audio Device(s) CREATIVE Sound Blaster ZxR
Power Supply EVGA 1200 P2 Platinum
Mouse Logitech G900 / SS QCK
Keyboard Deck 87 Francium Pro
Software Windows 10 Pro x64
When I think about it, Pascal GP100 will cost me as much as a whole X99 platform - memory, GPU, CPU.
But the graphics card is the most important part.
 