# NVIDIA Details "Pascal" Some More at GTC Japan



## btarunr (Nov 18, 2015)

NVIDIA revealed more details of its upcoming "Pascal" GPU architecture at the Japanese edition of the Graphics Technology Conference. The architecture will be designed to nearly double performance/Watt over the current "Maxwell" architecture, by implementing the latest tech. This begins with stacked HBM2 (high-bandwidth memory 2). The top "Pascal" based product will feature four 4-gigabyte HBM2 stacks, totaling 16 GB of memory. The combined memory bandwidth for the chip will be 1 TB/s. Internally, bandwidths can touch as high as 2 TB/s. The chip itself will support up to 32 GB of memory, and so enterprise variants (Quadro, Tesla), could max out the capacity. The consumer GeForce variant is expected to serve up 16 GB.
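As a quick sanity check, the headline memory numbers are simple arithmetic (the even per-stack bandwidth split below is an illustrative assumption, not something NVIDIA stated):

```python
# Rough arithmetic behind the Pascal HBM2 figures quoted above.
stacks = 4                          # HBM2 stacks on the top SKU
gb_per_stack = 4                    # GB per stack
capacity_gb = stacks * gb_per_stack
print(capacity_gb)                  # 16 GB total

# If the 1 TB/s aggregate bandwidth splits evenly across stacks:
per_stack_gbs = 1000 / stacks
print(per_stack_gbs)                # 250.0 GB/s per stack
```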

It's also becoming clear that NVIDIA will build its "Pascal" chips on the 16 nanometer FinFET process (AMD will build its next-gen chips on more advanced 14 nm process). NVIDIA is developing a new interconnect called NVLink, which will change the way the company builds dual-GPU graphics cards. Currently, dual-GPU cards are essentially two graphics cards on a common PCB, with PCIe bandwidth from the slot shared by a bridge chip, and an internal SLI bridge connecting the two GPUs. With NVLink, the two GPUs will be interconnected with an 80 GB/s bi-directional data path, letting each GPU directly address memory controlled by the other. This should greatly improve memory management in games that take advantage of newer APIs such as DirectX 12 and Vulkan, and prime the graphics card for higher display resolutions. NVIDIA is expected to launch its first "Pascal" based products in the first half of 2016.
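For scale, the quoted 80 GB/s NVLink figure can be compared against a PCIe 3.0 x16 slot (the PCIe numbers below are the standard theoretical maxima, not figures from the article):

```python
# Compare the quoted NVLink bandwidth against a PCIe 3.0 x16 slot.
pcie3_gt_per_lane = 8.0             # GT/s per lane (PCIe 3.0)
lanes = 16
encoding = 128 / 130                # 128b/130b line-coding overhead
# Per-direction throughput in GB/s (8 bits per byte)
pcie3_per_dir_gbs = pcie3_gt_per_lane * lanes * encoding / 8
nvlink_bidir_gbs = 80               # NVLink figure quoted in the article

print(round(pcie3_per_dir_gbs, 2))                           # ~15.75 GB/s
print(round(nvlink_bidir_gbs / (2 * pcie3_per_dir_gbs), 1))  # ~2.5x PCIe 3.0 x16
```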







----------



## TheGuruStud (Nov 18, 2015)

Another $1k card, folks. Hell, with that much ram, maybe more.


----------



## Naito (Nov 18, 2015)

TheGuruStud said:


> Another $1k card, folks. Hell, with that much ram, maybe more.



Would buy.


----------



## Steevo (Nov 18, 2015)

With 16 GB of RAM available to each GPU, what the hell are they planning on putting in the rest of the memory that it would need access to more?

We are at the point of needing 3 GB of VRAM with good memory management, and 4 GB for textures that may not be optimized, not 16 with piss-poor management. 8 GB would be enough with better timings to reduce latency.


----------



## RejZoR (Nov 18, 2015)

Keeping developers lazy since, I don't know, too long. The only good thing about consoles is that devs actually have to code games well in order to make them look epic and run the same. But on PC, who cares, just smack in an extra 8 GB of RAM and a graphics card with twice the memory and you can have a sloppily coded game that will run kinda fine because of that...


----------



## Naito (Nov 18, 2015)

RejZoR said:


> Keeping developers lazy since, I don't know, too long. The only good thing about consoles is that devs actually have to code games well in order to make them look epic and run the same. But on PC, who cares, just smack in an extra 8 GB of RAM and a graphics card with twice the memory and you can have a sloppily coded game that will run kinda fine because of that...



Do you have a reference to such an occurrence? Maybe it's just for the next wave of insane 4K optimized games? Or perhaps the 16GB will be reserved for that whole Titan X-style scenario and the more reasonably priced SKUs will have 8GB?


----------



## HumanSmoke (Nov 18, 2015)

Naito said:


> Do you have a reference to such an occurrence? Maybe it's just for the next wave of insane 4K optimized games? Or perhaps the 16GB will be reserved for that whole Titan X-style scenario and the more reasonably priced SKUs will have 8GB?


That's probably closer to the truth, I think. Two HBM2 stacks for a performance/mainstream product would greatly reduce overall cost (final assembly). The whole Pascal presentation seems to be an adjunct to the Supercomputing Conference (SC15) just held in Texas, so I'm figuring the thrust of the Pascal presentations is more toward Tesla than GeForce.


btarunr said:


> NVIDIA will build its "Pascal" chips on the 16 nanometer FinFET process (AMD will build its next-gen chips on more advanced 14 nm process).


Oh?
1. Can't say I've seen anything that actually backs up your assertion that 14LPP is more advanced than 16FF+/16FFC
2. I also haven't seen it confirmed anywhere that AMD will tap GloFo exclusively for GPUs. Somewhere in the last 7-8 weeks "could" has turned into "will". Some sources seem to think that GloFo will be tapped for lower end GPUs and Zen, with TSMC tasked with producing the larger GPUs.


----------



## ZoneDymo (Nov 18, 2015)

It's that NVLink crap that has me worried.
Instead of, oh I don't know, having a single damn GPU capable of running games at 4K @ 120 Hz, they seem to focus and plan on us getting two...

I already find it massively BS with the current gen of cards: 600-1000 euro for a card that feels outdated right away...

AC Unity at a mere 1080p with a freaking $1,000 Titan X only manages 48 fps...
BF4 at 4K, same $1,000 card... a mere 41 fps...

I could go on. If I am laying down that kind of cash for the latest and greatest GPU, it better damn well run at least current effing games at the highest settings and then some.
It's ridiculous they ask you to put down that kind of money twice.
I mean, I'm not even talking about the more demanding, more advanced games of the future; I'm talking about the here and now, and those cards cannot do that?

I feel we should demand better as consumers, tbh, but yeah, that focus on better SLI using NVLink... not a good thing to focus on, IMO.


----------



## medi01 (Nov 18, 2015)

Naito said:


> Do you have a reference to such an occurrence?


The latest "The Longest Journey". (built on Unity engine, methinks)




HumanSmoke said:


> 1. Can't say I've seen anything that actually backs up your assertion that 14LPP is more advanced than 16FF+/16FFC



Actually, it's the opposite as far as iThings go: some chips are manufactured on Samsung's 14 nm, some on TSMC's 16 nm, and the former consume more power.


----------



## TheDeeGee (Nov 18, 2015)

I'm ready!


----------



## HumanSmoke (Nov 18, 2015)

ZoneDymo said:


> It's that NVLink crap that has me worried.


Why? Are you planning on building a supercomputing cluster?


ZoneDymo said:


> I feel we should demand better as consumers, tbh, but yeah, that focus on better SLI using NVLink... not a good thing to focus on, IMO.


Sorry to bust up your indignation tirade, but since it isn't aimed at gaming SLI but at workloads that are more intensive on system-bus bandwidth - notably the Exascale computing initiative with IBM, Cray, and Mellanox - that really shouldn't be a problem.


----------



## ZoneDymo (Nov 18, 2015)

HumanSmoke said:


> Why? Are you planning on building a supercomputing cluster?
> 
> Sorry to bust up your indignation tirade, but since it isn't aimed at gaming SLI but workloads that are more intensive on system bus bandwidth - notably the Exascale computing initiative with IBM, Cray, and Mellanox, that really shouldn't be a problem.



If you think this will not find its way ASAP into gaming...
It's sort of how the internet originated in military development; everything seems to start with military or space development and then finds its way to the consumer.
Sure, this is first for industry, but yeah, it will make its way to gamers soon enough.

And even if it does not, the statement still stands: I hate this focus and "need" for dual-GPU setups to get anything decent going.

Also, on a side note... dear gawd, what a presentation...


----------



## deemon (Nov 18, 2015)

Did they fix async compute with Pascal? (Or was Pascal already designed and "taped out" when the scandal started?)

Did they fix the VR preemption problem? *Nvidia VR preemption "possibly catastrophic"*

Did they add FreeSync/Adaptive-Sync compatibility?


And hopefully we get 980 Ti performance in a 970 mini/Nano/or even smaller form factor.
And does anybody know Arctic Islands' actual product availability (not paper launch) date yet?


----------



## deemon (Nov 18, 2015)

Steevo said:


> With 16 GB of RAM available to each GPU, what the hell are they planning on putting in the rest of the memory that it would need access to more?
> 
> We are at the point of needing 3 GB of VRAM with good memory management, and 4 GB for textures that may not be optimized, not 16 with piss-poor management. 8 GB would be enough with better timings to reduce latency.



"No one _will_ need more than 640 kB of memory for a personal computer"


----------



## Easo (Nov 18, 2015)

1TB/sec?
Well, damn...


----------



## RejZoR (Nov 18, 2015)

deemon said:


> Did they fix async compute with the Pascal? (or was Pascal already designed and "taped out" when the scandal started?)
> 
> Did they fix the VR preemption problem? *Nvidia VR preemption "possibly catastrophic"*
> 
> ...



It's not a scandal. Maxwell 2 is capable of async compute with a limited queue size at minimal performance cost. Once you exceed that queue size, performance starts dropping. Radeon cards have a much larger async queue and usually don't hit the ceiling. It's not really known whether games need queues beyond what Maxwell 2 can do...


----------



## arbiter (Nov 18, 2015)

deemon said:


> Did they fix async compute with the Pascal? (or was Pascal already designed and "taped out" when the scandal started?)
> 
> Did they add freesync compatibility?



Kinda hard to support a tech that was AMD-locked for so many years and didn't get added to DX12 until the last minute, which was after Maxwell 2 was final. FreeSync is an AMD-locked software solution (go read AMD's own FAQs before you try to call me a fanboy); Adaptive-Sync is the standard, and FreeSync uses that standard in a proprietary way. Old Radeon 7000 cards can do Adaptive-Sync but not FreeSync. Pascal being taped out doesn't mean it's the final chip; it is a prototype that can very well change and isn't the final design. But NVIDIA does have a 5-month gap. Hope that doesn't mean a 5-6 month gap between AMD's and NVIDIA's chips, but it could be less if AMD cuts corners, which they will have to, and that's likely not gonna be a good idea.


----------



## medi01 (Nov 18, 2015)

arbiter said:


> Freesync is AMD locked software solution, (go read AMD's own FAQ's before you try to call me a fanboy)





arbiter said:


> Adaptive sync is the standard, freesync uses that standard in a proprietary way.



1) G-Sync is as locked down as it gets (to the "nope, won't license it to anyone" point)
2) Adaptive-Sync is THE ONLY standard; there is no "FreeSync" standard
3) nothing stops any manufacturer out there from using Adaptive-Sync (DP 1.2a); no need to involve AMD or any of its "FreeSync" stuff


----------



## TheinsanegamerN (Nov 18, 2015)

Naito said:


> Do you have a reference to such an occurrence? Maybe it's just for the next wave of insane 4K optimized games? Or perhaps the 16GB will be reserved for that whole Titan X-style scenario and the more reasonably priced SKUs will have 8GB?


Batman: Arkham Knight and Black Ops III come to mind.


----------



## qubit (Nov 18, 2015)

I'm really looking forward to that unified memory architecture and the elimination of SLI problems.


----------



## dj-electric (Nov 18, 2015)

Arma and Battlefield also love super-high-res textures.
Basically, many non-single-player, non-"GPU test" scenarios just like much more VRAM.


----------



## Casecutter (Nov 18, 2015)

btarunr said:


> (AMD will build its next-gen chips on more advanced 14 nm process).


 


HumanSmoke said:


> Oh?
> 1. Can't say I've seen anything that actually backs up your assertion that 14LPP is more advanced than 16FF+/16FFC
> 2. I also haven't seen it confirmed anywhere that AMD will tap GloFo exclusively for GPUs. Somewhere in the last 7-8 weeks "could" has turned into "will". Some sources seem to think that GloFo will be tapped for lower end GPUs and Zen, with TSMC tasked with producing the larger GPUs.


 
Yea, btarunr came out of left field with that snippet; as soon as I read it, it was WTF.
Thanks for that clean-up.



btarunr said:


> NVIDIA is expected to launch its first "Pascal" based products in the first half of 2016.


Do we have this as confirmation? Sure, it could start deliveries for HPC customer initiatives first (Exascale/IBM, Cray, etc.), then professional products (Tesla/Quadro), while GeForce use of the GP100 should be out a ways.


----------



## FreedomEclipse (Nov 18, 2015)

TheinsanegamerN said:


> Batman: Arkham Knight and Black Ops III come to mind.



It's amazing how people were so hyped about BLOPS III. It was all over the internet, and now it's like it's completely faded into obscurity. Nobody talks about it anymore.


But then again, Fallout 4 happened, so even though Activision were the first to get their game out, it's Bethesda that's gettin' all the pussy.

:::EDIT:::

Oh, and not to forget about Battlefront, of course, which has already been available in many countries apart from the UK. People are either playing one of the two games.


----------



## HM_Actua1 (Nov 18, 2015)

Hell yah! bring it! Excited for Pascal!


----------



## deemon (Nov 18, 2015)

arbiter said:


> Kinda hard to support a tech that was AMD-locked for so many years and didn't get added to DX12 until the last minute, which was after Maxwell 2 was final. FreeSync is an AMD-locked software solution (go read AMD's own FAQs before you try to call me a fanboy); Adaptive-Sync is the standard, and FreeSync uses that standard in a proprietary way. Old Radeon 7000 cards can do Adaptive-Sync but not FreeSync. Pascal being taped out doesn't mean it's the final chip; it is a prototype that can very well change and isn't the final design. But NVIDIA does have a 5-month gap. Hope that doesn't mean a 5-6 month gap between AMD's and NVIDIA's chips, but it could be less if AMD cuts corners, which they will have to, and that's likely not gonna be a good idea.





medi01 said:


> 1) G-Sync is as locked down as it gets (to the "nope, won't license it to anyone" point)
> 2) Adaptive-Sync is THE ONLY standard; there is no "FreeSync" standard
> 3) nothing stops any manufacturer out there from using Adaptive-Sync (DP 1.2a); no need to involve AMD or any of its "FreeSync" stuff



so much misinformation.

Adaptive sync IS FreeSync.

*FreeSync* is the brand name for an adaptive synchronization technology for LCD displays that support a dynamic refresh rate aimed at reducing screen tearing.[2] FreeSync was initially developed by AMD in response to NVidia's G-Sync. FreeSync is royalty-free, free to use, and has no performance penalty.[3] As of 2015, VESA has adopted FreeSync as an optional component of the DisplayPort 1.2a specification.[4] FreeSync has a dynamic refresh rate range of 9–240 Hz.[3] As of August 2015, Intel also plan to support VESA's adaptive-sync with the next generation of GPU.[5]

https://en.wikipedia.org/wiki/FreeSync


----------



## AsRock (Nov 18, 2015)

MS says Win 10 will allow mixed cards, then NVIDIA comes out with this. Makes me wonder if they're going to nuke it and disable the crap out of it all over again.



> NVIDIA is developing a new interconnect called NVLink, which will change the way the company builds dual-GPU graphics cards.


----------



## Estaric (Nov 18, 2015)

Easo said:


> 1TB/sec?
> Well, damn...


I thought the same thing!


----------



## 荷兰大母猪 (Nov 18, 2015)

次世代GPU？3倍-5倍性能？本当？


----------



## Fluffmeister (Nov 18, 2015)

Hitman_Actual said:


> Hell yah! bring it! Excited for Pascal!



Ditto my friend, ditto! :O


----------



## FreedomEclipse (Nov 18, 2015)

荷兰大母猪 said:


> 次世代GPU？3倍-5倍性能？本当？



In English please?


----------



## nickbaldwin86 (Nov 18, 2015)

two please... I will email givemefreenvidia@nvidia.com my address to send them to.

Thanks
Nick


----------



## HM_Actua1 (Nov 18, 2015)

LET THE MILLENNIALS AND AMD FB RAGE BEGIN!

Pascal will smoke everything out there


----------



## dorsetknob (Nov 18, 2015)

荷兰大母猪 said:


> 次世代GPU？3倍-5倍性能？本当？





FreedomEclipse said:


> In English please?



rough translation:
Next-gen GPU? 3-5x the performance? Really?

    "" 3-5x the performance? "" Dream on


----------



## cadaveca (Nov 18, 2015)

qubit said:


> I'm really looking forward to that unified memory architecture and the elimination of SLI problems.


Unified memory alone will not fix SLI problems. Most issues are more about proper resource management than about not having shared memory, although post-processing will be a bit easier to manage if NVLink does what it is purported to do. The big boon of shared memory is the added addressing space, as well as the ability to store more data, allowing for greater detail.


----------



## qubit (Nov 18, 2015)

cadaveca said:


> Unified memory will not alone fix SLI problems. Most issues are more about proper resource management than it is about not having shared memory, although post-processing will be a bit easier to manage if NV-Link does what it is purported to be able to do. The big boon of shared memory is the added addressing space as well as the ability to store more data allowing for greater detail.


You might be right, I honestly dunno. I just remember that when this new form of SLI was announced several months ago by NVIDIA (they had a blog post that was reported widely by the tech press, including TPU) it sounded like all these problems would go away. Regardless, I'll bet it will be a big improvement over what we've got now.


----------



## Solidstate89 (Nov 18, 2015)

AsRock said:


> MS says win 10 will allow mixed cards then nVidia come out with this, makes me wounder if they going nuke it and disable the crap out of it all over again.


NVLink has nothing to do with the consumer space, and I don't know why people keep assuming it does. It literally replaces the PCI-e standard and adds cost and complexity that system builders neither want nor need. Not to mention the CPU/PCH has to support the capability in order to communicate between the GPU and the CPU.

On top of that, the DX12 explicit multi-GPU mode has to be specifically coded for and enabled by game developers; the GPU vendors have very little to do with implementing it, and the drivers have little if anything to do with optimizing it due to the low-level nature of DX12.

The only option nVidia could possibly have at even approaching NVLink usage in the consumer space is in dual-GPU cards with two GPU dies on a single PCB, using NVLink as an interconnect devoted specifically to GPU-to-GPU communications.


----------



## HumanSmoke (Nov 18, 2015)

Solidstate89 said:


> NVLink has nothing to do with the consumer space and I don't know why people keep assuming it does. It literally replaces the PCI-e standard and adds cost and complexity the system builders neither want nor need. Not to mention the CPU/PCH has to support the capability in order to communicate between the GPU and the CPU.


That's about it. Just as Intel is pushing for PCI-E 4.0 and buying Cray's Aries/Gemini interconnect for pushing bandwidth in the big iron war with IBM, the latter has paired with Nvidia (NVLink) and Mellanox to do the exact same thing for OpenPOWER. The fixation some people have with everything tech HAVING to revolve around gaming is perplexing to say the least.


Solidstate89 said:


> The only option nVidia could possibly have at even approaching NVLink usage in the consumer space is in Dual-GPU cards with two GPU dies on a single PCB, using the NVLink as an interconnect devoted specifically to GPU-to-GPU communications.


That was my understanding also. The only way for Nvidia to get NVLink into the consumer space would be for it to be folded into the PCI-E 4.0 specification, or as an optional dedicated chip in the same way that Avago's PEX lane extender chips are currently used (and Nvidia's own old NF200 predecessor for that matter).


----------



## cadaveca (Nov 18, 2015)

HumanSmoke said:


> That was my understanding also. The only way for Nvidia to get NVLink into the consumer space would be for it to be folded into the PCI-E 4.0 specification, or as an optional dedicated chip in the same way that Avago's PEX lane extender chips are currently used (and Nvidia's own old NF200 predecessor for that matter).



NVLink should allow for direct access to system ram, and that function is already supported by PCIe spec, AFAIK. It's really no different than AMD's "sidebar" that was present on past GPU designs. IBM has already partnered with NVidia for NVLink, so I'm sure we'll see NVidia GPUs paired with PowerPC CPUs in short order.


----------



## HumanSmoke (Nov 18, 2015)

cadaveca said:


> NVLink should allow for direct access to system ram, and that function is already supported by PCIe spec


The function is, but the bandwidth isn't.
PCI-E bandwidth isn't an issue for consumer GPUs in 99%+ of situations, as W1zz's many PCIe 1.1/2.0/3.0 comparisons have shown. HPC bandwidth, both intra- and inter-nodal, on the other hand... it isn't hard to see how a couple of CPUs feeding eight dual-GPU K80s or next-gen GPUs at 100% workload might produce some different effects as regards bandwidth saturation compared to a gaming system.


cadaveca said:


> IBM has already partnered with NVidia for NVLink, so I'm sure we'll see NVidia GPUs paired with PowerPC CPUs in short order.


Next year for early access and test/qualification/validation. POWER9 (14 nm) won't be ready for prime time until 2017, so the early systems will be based on the current POWER8.


----------



## medi01 (Nov 18, 2015)

Fury is roughly on par with Maxwell on power efficiency.
Interesting who will have the better process: GloFo 14 nm or TSMC 16 nm.
Samsung's 14 nm was rumored to suck.



deemon said:


> so much yadaydadayada



Try harder:

1) G-Sync is as locked down as it gets (to the "nope, won't license it to anyone" point)
2) Adaptive-Sync is THE ONLY *standard* (*DISPLAYPORT 1.2A, THAT IS*); there is no "FreeSync" standard.
3) nothing stops any manufacturer out there from using Adaptive-Sync (DP 1.2a); no need to involve AMD or any of its "FreeSync" stuff


----------



## cadaveca (Nov 18, 2015)

HumanSmoke said:


> it isn't hard to see how a couple of CPUs feeding eight dual-GPU K80's or next-gen GPUs at 100% workload might produce some different effects regards bandwidth saturation compared to a gaming system.


I've literally complained about a lack of bandwidth for multi-GPU processing for a long time, only to get things like "mining doesn't need bandwidth!" as responses. GPGPU has been limited by PCIe for the past 5-7 years from my perspective.


----------



## Darksword (Nov 18, 2015)

TheGuruStud said:


> Another $1k card, folks. Hell, with that much ram, maybe more.



$1,000.00?  HA!  This is Nvidia we're talking about.  

Try, $2,000.00 at least.


----------



## matar (Nov 18, 2015)

I have been waiting for this and can't wait. My next build: Intel Broadwell-E with X99, USB 3.1, and nVidia Pascal in SLI.
I skipped the 600, 700, and 900 series; 28 nm Maxwell didn't sell me. Now it's worth the upgrade. November 2016 Black Friday is my shopping date; saving from now...


----------



## FreedomEclipse (Nov 18, 2015)

matar said:


> I have been waiting for this and can't wait. My next build: Intel Broadwell-E with X99, USB 3.1, and nVidia Pascal in SLI.
> I skipped the 600, 700, and 900 series; 28 nm Maxwell didn't sell me. Now it's worth the upgrade. November 2016 Black Friday is my shopping date; saving from now...



So a $4000 computer then? Are you going to be F@lding or Crunching to the moon and back?


----------



## matar (Nov 18, 2015)

FreedomEclipse said:


> So a $4000 computer then? Are you going to be F@lding or Crunching to the moon and back?


Broadwell-E and nVidia Pascal will be available in mid-2016. It's not like they are out today; I am buying them next year.


----------



## HumanSmoke (Nov 18, 2015)

cadaveca said:


> I've literally complained about a lack of bandwidth for multi-GPU processing for a long time, only to get things like "mining doesn't need bandwidth!" as responses. GPGPU has been limited by PCIe for the past 5-7 years from my perspective.


Sounds like the responses you've been getting aren't particularly well informed. I did note 99%+ of usage scenarios (current), but there are a few people running 3- and 4-card setups, where the performance difference is more obvious...





...for HPC, I think latency is just as much an issue. Just as PCI-E 1.1/2.0 generally manifests as increased frame variance/stutter in comparison to 3.0 in bandwidth-limiting scenarios, time to completion for GPGPU workloads is also affected by latency issues. Where time is literally money when selling time on a cluster, it's easy to see why Nvidia pushes the reduced latency of NVLink.


----------



## lilhasselhoffer (Nov 18, 2015)

Let's rip out the crap that AMD already said, as HBM is their baby.  That means the VRAM quantities aren't news.

What we're left with is NVLink.  It's interesting, if somewhat disturbing.

Right now, single-card dual-GPU setups don't scale great and cost a ton of money. NVLink addresses... maybe the first issue. The bigger problem is that even if it solves scaling, you've still got factor two: cost. As this conclusion is self-evident, we're back to the NVLink announcement not being about consumer GPUs. The VRAM side definitely wasn't.

Is this good for HPC? Absolutely. Once you stop caring about price, the better the interconnect speed, the more you can compute. I applaud Nvidia announcing this for HPC, but it's standing against Intel. Intel is buying up semiconductor companies for their IP, and working with other companies in their field to corner the HPC market via common interconnects (PCI-e 4.0).

The disturbing part is the upcoming war in which Intel decides to cut PCI-e lanes to make its PCI-e 4.0 standard required. The consumer Intel offerings are already a little sparse on PCI-e lanes. I don't want Intel deciding to provide fewer PCI-e lanes to penalize Nvidia for NVLink, which would also influence the AMD vs. Nvidia dynamic.



This is interesting, but not news for gamers. Please, show me the Pascal variant with about 8 GB of VRAM that has 60-80% better performance than my current 7970 while sipping power. Until then, thanks, but I'm really not the target audience.


----------



## arbiter (Nov 18, 2015)

deemon said:


> so much misinformation.
> 
> Adaptive sync IS FreeSync.
> 
> *FreeSync* is the brand name for an adaptive synchronization technology for LCD displays that support a dynamic refresh rate aimed at reducing screen tearing.[2] FreeSync was initially developed by AMD in response to NVidia's G-Sync. FreeSync is royalty-free, free to use, and has no performance penalty.[3] As of 2015, VESA has adopted FreeSync as an optional component of the DisplayPort 1.2a specification.[4] FreeSync has a dynamic refresh rate range of 9–240 Hz.[3] As of August 2015, Intel also plan to support VESA's adaptive-sync with the next generation of GPU.[5]


Speaking of misinformation, you quote Wikipedia.


> * How are DisplayPort Adaptive-Sync and AMD FreeSync™ technology different? *
> DisplayPort Adaptive-Sync is an ingredient DisplayPort feature that enables real-time adjustment of monitor refresh rates required by technologies like AMD FreeSync™ technology. *AMD FreeSync™ technology is a unique AMD hardware/software solution that utilizes DisplayPort Adaptive-Sync protocols to enable user-facing benefits*: smooth, tearing-free and low-latency gameplay and video. Users are encouraged to read this interview to learn more.


Source: http://support.amd.com/en-us/search/faq/214   <---- straight from AMD themselves. In short: proprietary use of the protocol.




HumanSmoke said:


> The function is, but the bandwidth isn't.
> PCI-E bandwidth isn't an issue for consumer GPU in 99%+ situations - as W1zz's many PCIE 1.1/2.0/3.0 comparisons have shown. HPC bandwidth, both intra- and inter-nodal on the other hand....it isn't hard to see how a couple of CPUs feeding eight dual-GPU K80's or next-gen GPUs at 100% workload might produce some different effects regards bandwidth saturation compared to a gaming system.


Well, NVLink will allow, on a dual-GPU card, one GPU to access the memory of the other, as explained in the brief. Can't really do that with a pipe as limited as PCI-E is at the moment. As resolution goes up, we could likely see the benefit of that much higher-bandwidth pipe in performance.


----------



## HumanSmoke (Nov 19, 2015)

lilhasselhoffer said:


> The disturbing part is the upcoming war in which Intel decides to cut PCI-e lanes to make their PCI-e 4.0 standard required.  The consumer Intel offerings are already a little sparse on their PCI-e lanes.  I don't want Intel deciding to push less PCI-e lanes to penalize Nvidia for NVLink, which will also influence the AMD vs. Nvidia dynamic.


Very unlikely to happen. Intel has in the past been threatened with sanctions, and the FTC settlement (aside from barring substantial alterations to PCI-E for another year at least) only makes allowances for Intel's PCI-E electrical lane changes if they benefit Intel's own CPUs - somewhat difficult to envisage as a scenario. Disabling PCI-E would require a justification that would suit both Intel and the FTC, and not incur anti-monopoly suits from add-in board vendors (graphics, sound, SSD, RAID, ethernet, wi-fi, expansion options, etc.).


> The second requirement is that Intel is not allowed to engage in any actions that limit the performance of the PCIe bus on the CPUs and chipsets, which would be a backdoor method of crippling AMD or NVIDIA’s GPUs’ performance. At first glance this would seem to require them to maintain status quo: x16 for GPUs on mainstream processors, and x1 for GPUs on Atom (much to the chagrin of NVIDIA no doubt). However Intel would be free to increase the number of available lanes on Atom if it suits their needs, and there’s also a clause for reducing PCIe performance. If Intel has a valid technological reason for a design change that reduces GPU performance and can prove in a real-world manner that this change benefits the performance of their CPUs, then they can go ahead with the design change. So while Intel is initially ordered to maintain the PCIe bus, they ultimately can make changes that hurt PCIe performance if it improves CPU performance.



Bear in mind that when the FTC made the judgement, PCI-E's relevance was expected to diminish, not be looking at a fourth generation. It's hard to make a case for Intel pulling the plug, or decreasing PCI-E compatibility options, when their own server/HPC future is tied to PCI-E 4.0 (and Omni-Path, which has no more relevance to consumer desktops than its competitor, Mellanox's InfiniBand).


lilhasselhoffer said:


> This is interesting, but not news for gamers.  Please, show me the Pascal variant with about 8 GB of VRAM that has 60-80% better performance than my current 7970 while sipping power.  Until then, thanks but I'm really not the target audience.


Performance/power might be a juggling act depending on which target market the parts end up in, but Nvidia released numbers for Pascal at SC15: ~4 TFLOPs of double precision for the top SKU (presumably GP100), which probably equates to a 1:3:6 ratio (FP64:FP32:FP16), so about 12 TFLOPs of single precision.
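The ratio arithmetic works out as follows (the 1:3:6 FP64:FP32:FP16 split is the poster's assumption, not a confirmed spec):

```python
# Derive FP32/FP16 throughput from the quoted FP64 figure,
# assuming a 1:3:6 FP64:FP32:FP16 throughput ratio.
fp64_tflops = 4.0                  # quoted double-precision figure
fp32_tflops = fp64_tflops * 3      # single precision
fp16_tflops = fp64_tflops * 6      # half precision
print(fp32_tflops, fp16_tflops)    # 12.0 24.0
```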




----------



## lilhasselhoffer (Nov 19, 2015)

HumanSmoke said:


> Very unlikely to happen. Intel has been in the past threatened with sanction, and the FTC settlement (aside from being unable to substantially alter PCI-E for another year at least) only makes allowances for Intel's PCI-E electrical lane changes if it benefits their own CPUs - somewhat difficult to envisage as a scenario. Disabling PCI-E would require a justification that would suit both Intel, the FTC, and not incur anti-monopoly suits from add in board vendors (graphics, sound, SSD, RAID, ethernet, wi-fi, expansion options etc.)
> 
> 
> Bear in mind that when the FTC made the judgement, PCI-E's relevance was expected to diminish, not be looking at a fourth generation. It's hard to make a case for Intel pulling the plug, or decreasing PCI-E compatibility options, when their own server/HPC future is tied to PCI-E 4.0 (and Omni-Path, which has no more relevance to consumer desktops than its competitor, Mellanox's InfiniBand).
> ...



While I appreciate the fact check, disabling PCI-e wasn't what I was trying to say.  What I meant was developing a wholly new interface and offering only a handful of PCI-e connections.  They would effectively make its use possible, but not reasonable.  If they can demonstrate the ability to connect any card to their system via the PCI-e bus, it effectively means they're following the FTC's requirements to the letter of the law (if not the spirit).  Nowhere in the FTC's ruling can I find an indication of how many PCI-e lanes are required, only that they must be present and meet PCI-SIG electrical requirements.

For example, instead of introducing PCI-e 4.0, introduce PCE (Platform Connect Experimental).  10 PCE connections are allowed to directly connect to the CPU (not interchangeable with PCI-e), while a single PCI-e lane is connected to the CPU.  Intel still provides another 2 PCI-e lanes from the PCH, which don't exactly function as well for a GPU.

Intel decides to go whole hog with PCE, and cut Nvidia out of the HPC market.  They allow AMD to cross-license the interconnect (under their sharing agreement for x86), but set up some substantial fees for Nvidia.  In effect, Intel provides PCI-e as an option, but those who require interconnect have to forego Nvidia products.


As I read the ruling, this is technically not messing with PCI-e electrically.  It's also making the HPC market effectively Intel's, because the high performance needs make PCI-e unusable (despite physically being present).  It follows along with the theory that PCI-e will be supplanted as well.  Have I missed something here?


----------



## HumanSmoke (Nov 19, 2015)

lilhasselhoffer said:


> While I appreciate the fact check, disabling PCI-e wasn't what I was trying to say.  What I meant is developing a wholly new interface, and only offering a hand full of PCI-e interconnection.


Well, Intel could theoretically turn their back on a specification they basically pushed for, but how does that not affect not just every vendor committed to PCI-E (since 4.0, like previous versions, is backwards compatible), but every vendor already preparing PCI-E 4.0 logic? (Seems kind of crappy to have vendors showing PCI-E 4.0 logic at an *Intel Developers Forum* if they planned on shafting them.)


lilhasselhoffer said:


> If they can demonstrate the ability to connect any card to their system via PCI-e bus it effectively means they're following the FTC's requirements to the letter of the law (if not the spirit).


The FTC's current mandate does not preclude further action (nor that of the EU or DoJ for that matter), as evidenced by the Consent Decree the FTC slapped on it last year.


lilhasselhoffer said:


> Intel decides to go whole hog with PCE, and cut Nvidia out of the HPC market.


Really? I'm not sure how landing a share of a $325 million contract and an ongoing partnership with IBM fits into that. ARM servers/HPC also use PCI-E, and are also specced for Nvidia GPGPU deployment.


lilhasselhoffer said:


> They allow AMD to cross-license the interconnect (under their sharing agreement for x86),


Well, that's not going to happen unless AMD bring some IP of similar value to the table. Why would Intel give away IP to a competitor (and I'm talking about HSA here), and why would AMD opt for licensing Intel IP when PCI-E is not only free, but also used by all other HSA Foundation founder members - Samsung, ARM, Mediatek, Texas Instruments, and of course Qualcomm, whose new server chip business supports PCI-E... and that's without AMD alienating its own installed discrete graphics user base.

If you don't mind me saying so, that sounds like a completely convoluted and fucked up way to screw over a small IHV. If Intel were that completely mental about putting Nvidia out of business wouldn't they just buy it?


----------



## rtwjunkie (Nov 19, 2015)

Hitman_Actual said:


> LET THE MILLENNIALS AND AMD FB RAGE BEGIN!
> 
> Pascal will smoke everything out there



I'm not sure I would bet on that yet. Arctic Islands has just as much potential to smoke Pascal at this point.

In reality, I feel we will have rough parity in performance, which will be a plus for the beleaguered AMD.


----------



## terroralpha (Nov 19, 2015)

btarunr said:


> It's also becoming clear that NVIDIA will build its "Pascal" chips on the 16 nanometer FinFET process (AMD will build its next-gen chips on more advanced 14 nm process).



oh look, more baseless BS. who says that 14nm is more advanced? only intel and samjunk are making 14nm chips at this point so i'm assuming you are referring to the latter. you must not know about the iPhone SoC disaster... Apple sourced processors from TSMC and Samsung for the iPhone 6s. TSMC used 16nm FF and samsung used their shiny new 14nm process. samsung-built SoCs are burning more power and getting hotter even though they are basically the same.

if AMD is having samjunk build their chips then I'm DEFINITELY going with nvidia again.


----------



## lilhasselhoffer (Nov 19, 2015)

HumanSmoke said:


> Well, Intel could theoretically turn their back on a specification they basically pushed for, but how does that not affect every vendor not just committed to PCI-E (since 4.0 like previous versions is backwards compatible), but every vendor already preparing PCI-E 4.0 logic ? ( Seems kind of crappy to have vendors showing PCI-E 4.0 logic at an *Intel Developers Forum* if they planned on shafting them).
> 
> The FTC's current mandate does not preclude further action (nor that of the EU or DoJ for that matter), as evidenced by the Consent Decree the FTC slapped on it last year.
> 
> ...



I think there's a disconnect here.

I'm looking at a total of three markets here.  There's an emerging market, a market that Intel has a death grip on, and a market where there's some competition.  Intel isn't stupid, so they'll focus development on the emerging market, and the technology built there will filter down into other markets.  As the emerging market is HPC, that technology will be driving the bus over the next few years.  As adoption costs money, we'll see the interconnect go from HPC to servers to consumer goods incrementally.


As such, let's figure this out.  PCI-e 4.0 may well be featured heavily in both the consumer (Intel has mild competition) and server (Intel has a death grip) markets.  These particular products are continually improved, but they're iterative improvements.  It isn't a stretch to think that they'll have PCI-e 4.0 in the next generation, given that it's a minor improvement.  While the consumer and server markets continue to improve, the vast majority of research and development is done on the HPC market.  A market where money is less of an object, and where a unique new connection type isn't a liability, if better connection speeds can be delivered.

Intel develops a new interconnect for the HPC crowd, that offers substantially improved transfer rates.  They allow AMD to license the interconnect so that they can demonstrate that the standard isn't anti-competitive.  AMD has the standard, but they don't have the resources to compete in the HPC world.  They're stuck just trying to right the ship with consumer hardware and server chips (Zen being their first chance next year).  Intel has effectively produced a new interconnect standard in the market where their dominance is most challenged, demonstrated that they aren't utilizing anti-competitive practices, but have never actually opened themselves up for competition.  AMD is currently a lame duck because the HPC market is just out of its reach.

By the time the new technologies filter down to consumer and server level hardware PCI-e 4.0 will have been around for a couple of years.  Intel will have already utilized PCI-e as they pushed for, while already being out from the FTC's restrictions on including PCI-e.  They'll be able to offer token PCI-e support, and actually focus on their own interconnect.  It'll have taken at least a few years to filter to consumers, but the money Intel invested into research isn't going to be forgotten.


You seem to be looking at the next two years.  I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0.  What I'm worried about is 4-6 years down the line, once Intel has become heavily invested in the HPC market and they need to compete with Nvidia to capture more of it.  They aren't stupid, so they'll do whatever it takes to destroy the competition, especially when they're the only game in town capable of offering a decent CPU.  Once they've got a death grip on that market, the technology will just flow downhill from there.  This isn't the paranoid delusion that this little development will change everything tomorrow, but that it is setting the ship upon a course that will hit a rock in the next few years.  It is screwed up to say this will influence things soon, but it isn't unreasonable to say that Intel has a history of doing *whatever* it takes to secure market dominance.  FTC and fair trade practices be damned.


----------



## HumanSmoke (Nov 19, 2015)

lilhasselhoffer said:


> You seem to be looking at the next two years.  I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0.  What I'm worried about is 4-6 years down the line, once Intel has become heavily invested in the HPC market and they need to compete with Nvidia to capture more of it.


Nvidia survives (and thrives) in the HPC/Server/WS market because of its pervasive software environment, not its hardware - it is also not a major player WRT revenue. Intel's biggest concern is that its largest competitors are voluntarily moving in their own direction. The HPC roadmap is already mapped out to 2020 (and a little beyond), as is fairly well known. Xeon will pair with FPGAs (hence the Altera acquisition) and Xeon Phi. IBM has also roadmapped FPGA and GPU (Tesla) with POWER9 and POWER10. To those two you can add hyperscale server/HPC clusters (Applied Micro X-Gene, Broadcom Vulcan, Cavium Thunder-X, Qualcomm etc.) which Intel has targeted with Xeon-D.
Intel could turn the whole system into a SoC or MCM (processor + graphics/co-processor + shared eDRAM + interconnect) and probably will, because sure as hell IBM/Mellanox/Nvidia will be looking at the same scenario. If you're talking about PCIE being removed from consumer motherboards, then yes, eventually that will be the case. Whether Nvidia (or any other add-in card vendor) survives will rely on strength of product. Most chip makers are moving towards embedded solutions - and Nvidia also has a mezzanine module solution with Pascal, so that evolution is already in progress.


lilhasselhoffer said:


> They aren't stupid, so they'll do whatever it takes to destroy the competition,


All I can say is good luck with that. ARM has an inherent advantage that Intel cannot match so far. x86 simply does not scale down far enough to match ARM in high volume consumer electronics, and Intel is too monolithic a company to react and counter an agile and pervasive licensed ecosystem. They are in exactly the same position IBM was in when licensing meant x86 became competitive enough to undermine their domination of the nascent PC market. Talking of IBM and your "4-6 years down the line", POWER9 (2017) won't even begin deployment until 3 years hence, with POWER10 slated for 2020-21 entry. Given that in non-GPU accelerated systems, Intel's Xeon still lags behind IBM's BGQ and SPARC64 in computational effectiveness, Intel has some major competition.
On a purely co-processor point, Tesla continues to be easier to deploy, and has greater performance than Xeon Phi, which Intel counters by basically giving away Xeon Phi to capture market share (Intel apparently gifted Xeon Phis for China's Tianhe-2) - although its performance per watt and workload challenges mean that vendors still look to Tesla (as the latest Green500 list attests). Note that the top system is using a PEZY-SC GPGPU that does not contain a graphics pipeline (as I suspect future Teslas will evolve).
Your argument revolves around Intel being able to change the environment by force of will. That will not happen unless Intel choose to walk a path separate from its competitors and ignore the requirements of vendors. Intel do not sell HPC systems. Intel provide hardware in the form of interconnects and form-factored components. A vendor that actually constructs, deploys, and maintains the system - such as Bull (Atos), for the sake of example - still has to sell the right product for the job, which is why they sell Xeon-powered S6000s to some customers, and IBM-powered Escalas to others. How does Intel force both vendors and customers to turn away from competitors of equal (or far greater in some cases) financial muscle when their products are demonstrably inferior for certain workloads?


lilhasselhoffer said:


> t is screwed up to say this will influence things soon, but it isn't unreasonable to say that Intel has a history of doing *whatever* it takes to secure market dominance.  FTC and fair trade practices be damned.


Short memory. Remember the last time Intel tried to bend the industry to its will? How did Itanium work out?
Intel's dominance has been achieved through three avenues.
1. Forge a standard and allow that standard to become open (SSE, AVX, PCI, PCI-E etc) but ensure that their products are first to utilize the feature and become synonymous with its usage.
2. Use their base of IP and litigation to wage economic war on their competitors.
3. Limit competitors market opportunities by outspending them ( rebates, bribery)

None of those three apply to their competitors in enterprise computing.
1. You're talking about a proprietary standard (unless Intel hand it over to a special interest group). Intel's record is spotty to say the least. How many proprietary standards have forced the hand of an entire industry? Is Thunderbolt a roaring success?
2. Too many alliances, too many big fish. Qualcomm isn't Cyrix, ARM isn't Seeq, IBM isn't AMD or Chips & Technologies. Intel's record of trying to enforce its will against large competitors? You remember Intel's complete back-down to Microsoft over incorporating NSP in its processors? Intel's record against industry heavyweights isn't that which pervades the small pond of "x86 makers who aren't Intel".
3. Intel's $4.2 billion in losses in 2014 (add to that the forecast of $3.4 billion in losses this year) through literally trying to buy x86 mobile market share indicates that their effectiveness outside of the core businesses founded 40 years ago isn't that stellar. Like any business faced with overwhelming competition willing to cut profit to the bone (or even sustain losses for the sake of revenue), they bend to the greater force. Intel are just hoping that they are better equipped than the last time they got swamped (Japanese DRAM manufacturing forcing Intel from the market).

You talk as if Intel is some all-consuming juggernaut. The reality is that Intel's position isn't as rock solid as you may think. It does rule the x86 market, but their slice of the consumer and enterprise revenue pie is far from assured. Intel can swagger all it likes in the PC market, but their acquisition of Altera and pursuit of Cray's interconnect business are indicators that they know they have a fight on their hands. I'm not prone to voicing absolutes unless they are already proven, but I would be near certain that Intel would not introduce a proprietary standard - licensed or not, if it decreased marketing opportunity - and Intel's co-processor market doesn't even begin to offset the marketing advantages of the third-party add-in board market. 
***********************************************************************************************
You also might want to see the AMD license theory from a different perspective:

Say Intel develop a proprietary non-PCI-E standard and decide to license it to AMD to legitimize it as a default standard. What incentive is there for AMD to use it? Intel use the proprietary standard and cut out the entire add-in board market (including AMD's own graphics). If AMD have a credible x86 platform, why wouldn't they retain PCI-E and have the entire add-in board market to themselves (including both major players in graphics and their HSA partners' products), rather than fight Intel head-to-head in the marketplace with a new interface?

*Which option do you think would benefit AMD more? Which option would boost AMD's market share to the greater degree?*


----------



## lilhasselhoffer (Nov 19, 2015)

HumanSmoke said:


> Nvidia survives (and thrives) in the HPC/Server/WS market because of its pervasive software environment, not its hardware - it is also not a major player WRT revenue. Intel's biggest concern is that its largest competitors are voluntarily moving in their own direction. The HPC roadmap is already mapped out to 2020 (and a little beyond), as is fairly well known. Xeon will pair with FPGAs (hence the Altera acquisition) and Xeon Phi. IBM has also roadmapped FPGA and GPU (Tesla) with POWER9 and POWER10. To those two you can add hyperscale server/HPC clusters (Applied Micro X-Gene, Broadcom Vulcan, Cavium Thunder-X, Qualcomm etc.) which Intel has targeted with Xeon-D.
> Intel could turn the whole system into a SoC or MCM (processor + graphics/co-processor + shared eDRAM + interconnect) and probably will, because sure as hell IBM/Mellanox/Nvidia will be looking at the same scenario. If you're talking about PCIE being removed from consumer motherboards, then yes, eventually that will be the case. Whether Nvidia (or any other add-in card vendor) survives will rely on strength of product. Most chip makers are moving towards embedded solutions - and Nvidia also has a mezzanine module solution with Pascal, so that evolution is already in progress.
> 
> All I can say is good luck with that. ARM has an inherent advantage that Intel cannot match so far. X86 simply does not scale down far enough to match ARM in high volume consumer electronics, and Intel is too monolithic a company to react and counter an agile and pervasive licensed ecosystem. They are in exactly the same position IBM was in when licensing meant x86 became competitive enough to undermine their domination of the nascent PC market. Talking of IBM and your "4-6 years down the line", POWER9 (2017) won't even begin deployment until 3 years hence, with POWER10 slated for 2020-21 entry. Given that in non-GPU accelerated system, Intel's Xeon still lags behind IBM's BGQ and SPARC64 in computational effectiveness, Intel has some major competition.
> ...



I can see your point, but there is some inconsistency.

First off, my memory is long enough.  Their very first standard in modern computing was the x86 architecture.  The foundation which their entire business is built upon today, correct?  Yes, AMD pioneered x86-64, but Intel has been riding against RISC and its ilk for how many decades?  Itanium, RDRAM, and their failure in the business field are functionally footnotes in a much larger campaign.  They've managed to functionally annihilate AMD, despite AMD having had market dominance for a period of time.  They've managed several fiascos (Itanium, NetBurst, the FTC, etc.), yet came away less crippled than Microsoft.  I view them as very good at what they do, hulking to the point where any competition is unacceptable, and capable of undoing any errors by throwing enough cash and resources at them to completely remove their issues.

To your final proposition, please reread my original statement.  I propose that developing a proprietary standard allows a lame duck competitor a leg up, prevents competition in an emerging market, and still meets FTC requirements.  AMD benefits from making cards to the new interconnect standard because they can suddenly offer their products to an entirely new market.  Intel isn't helping their CPU business here, they're allowing AMD an avenue by which to make their HPC capable GPUs immediately compatible with Intel's offerings.  Intel effectively has AMD battle Nvidia for the HPC market, and while those two grind each other down they are able to mature their FPGA projects up to the point where HPC can be done on SOC options.  They have their own interconnect, a company that's willing to fight their battle for them, and time.  AMD is willing to get in on the fight because it's money.  Simply redesigning the interconnect will allow them to reach a new market, bolstered by Intel's tacit support.

Once all of this leaves the HPC market, and filters down to consumer hardware, is what I'm less than happy about.  ARM isn't a factor in the consumer space.  It isn't a factor because none of our software is designed to take advantage of a monstrous number of cores.  I don't see that changing in the next decade, because it would effectively mean billions, if not trillions, in money spent to completely rewrite code.  As such, consumers will get some of the technologies of HPC, but only those which can be translated to x86-64.  NVLink and the like won't translate anywhere except GPUs.  A new interconnect, on the other hand, would translate fine.  If Intel developed it in parallel to PCI-e 4.0 they would have a practical parachute, should they run into issues.  Can you not see how this both embraces PCI-e, while preparing to eject it once their alternatives come to fruition?



After saying all this, I can assume part of your response.  The HPC market is emerging, and Intel's architecture is holding it back.  I get it, HPC is a bottomless pit for money where insane investments pay off.  My problem is that Nvidia doesn't have the money Intel does.  Intel is a lumbering giant that never merely competes in a market; they seek dominance and control.  I don't understand what makes the HPC market any different.  This is why I think they'll pull something insane, and try to edge Nvidia out of the market.  They've got a track record of developing new standards, and throwing money at something until it works.  A new interconnect standard fits that bill exactly.  While I see why a human would have misgivings about going down the same path, Intel isn't human.

If you'd like a more recent history lesson on Intel introducing their own standards, let's review QPI.  If that doesn't float your boat, Intel is a part of the OIC which is standardizing interconnection for IoT devices.  I'd also like to point out that Light Peak became Thunderbolt, and they moved Light Peak to MXC (which to my knowledge is in use for high cost systems: http://www.rosenberger.com/mxc/).  Yes, Thunderbolt and Itanium were failures, but I'll only admit error if you can show me a company that's existed as long as Intel, yet never had a project failure.


----------



## HumanSmoke (Nov 20, 2015)

lilhasselhoffer said:


> Their very first standard in modern computing was the x86 architecture.  The foundation which their entire business is built upon today, correct?


Nope. Intel was built on memory. Their EPROM and DRAM business allowed them to prosper. The 1103 chip built Intel thanks to how cheap it was in comparison to magnetic-core memory. Microprocessors had low priority, especially from Gordon Moore and Robert Noyce (rather than recapitulate their growth, I would point you towards an article I wrote some time ago, and the reading list I supplied in the first post under the article - particularly the Tim Jackson and Bo Lojek books).


lilhasselhoffer said:


> Yes, AMD pioneered x86-64, but Intel has been riding against RISC and its ilk for how many decades?


ARM wasn't the force it is now, and up until comparatively recently IBM and Intel had little overlap. The only other architecture of note came from DEC, who mismanaged themselves out of existence. If DEC had accepted Apple's offer to supply the latter with processors, the landscape would very likely look a whole lot different - for one, AMD wouldn't have had the IP for K7 and K8, or HyperTransport.
None of what was exists now.


lilhasselhoffer said:


> Itanium, RDRAM, and their failure in the business field are functionally foot notes in a much larger campaign.


Makes little difference. Intel (like most tech companies that become established) in their early stages innovated (even if their IP largely accrued from Fairchild Semi and cross licences with Texas Instruments, National Semi, and IBM). Mature companies rely more on purchasing IP. Which is exactly the model Intel have followed.


lilhasselhoffer said:


> They've managed to functionally annihilate AMD, despite AMD having had market dominance for a period of time.


AMD are, and always have been, a bit part player. They literally owe their existence to Intel ( if it were not for Robert Noyce investing in AMD in 1969 they wouldn't have got anywhere close to their $1.55m incorporation target), and have been under Intel's boot since they signed their first contract to license Intel's 1702A EPROM in 1970. AMD have been indebted to Intel's IP their entire existence excluding their first few months where they manufactured licenced copies of Fairchild's TTL chips.


lilhasselhoffer said:


> I view them as very good at what they do, hulking to the point where any competition is unacceptable, and capable of undoing any errors by throwing enough cash and resources at them to completely remove their issues.


Except that Itanium was never accepted by anyone except HP who were bound by contract to accept it.
Except StrongARM and XScale (ARMv4/v5) never became any sort of success
Except Intel's microcontrollers have been consistently dominated by Motorola and ARM

Basically Intel has been fine so long as it stayed with x86. Deviation from the core product has met with failure. The fact that Intel is precisely nowhere in the mobile market should be a strong indicator that that trend is continuing. Intel will continue to purchase IP to gain relevancy and will in all probability continue to lose money outside of its core businesses.


lilhasselhoffer said:


> AMD benefits from making cards to the new interconnect standard because they can suddenly offer their products to an entirely new market.


...a market where Intel would still be the dominant player, holding 98.3-98.5% market share... and unless AMD plans on isolating itself from its HSA partners, licenses also need to be granted to them.


lilhasselhoffer said:


> Intel isn't helping their CPU business here, they're allowing AMD an avenue by which to make their HPC capable GPUs immediately compatible with Intel's offerings.


And why the hell would they do that? How can AMD compete with Intel giving away Xeon Phi co-processors? Nvidia survives because of the CUDA ecosystem. With AMD offering CUDA-porting tools and FirePro offering superior FP64 to Tesla (with both being easier to code for and offering better performance per watt than Phi), all Intel would be doing is substituting one competitor for another - with the first competitor remaining viable anyway thanks to IBM and ARM.


lilhasselhoffer said:


> Intel effectively has AMD battle Nvidia for the HPC market, and while those two grind each other down they are able to mature their FPGA projects up to the point where HPC can be done on SOC options.


That's a gross oversimplification. Intel and Nvidia compete in ONE aspect of HPC - GPU-accelerated clusters. Nvidia has no competition with Intel in some other areas (notably cloud services, where Nvidia have locked up both Microsoft and Amazon - the latter already having its 3rd-generation Maxwell cards installed), while Intel's main revenue earner, data centers, don't use GPUs, and 396 of the top 500 supers don't use GPUs either.


lilhasselhoffer said:


> They have their own interconnect, a company that's willing to fight their battle for them, and time.  AMD is willing to get in on the fight because it's money.


Not really. It's more R&D and more time and effort spent qualifying hardware for a market that Intel will dominate from well before any contract is signed. Tell me this: when has Intel EVER allowed licensed use of their IP before Intel itself had assumed a dominant position with the same IP? (The answer is never.) Your scenario postulates that AMD will move from one architecture where they are dominated by Intel to another architecture where they are dominated by Intel AND have to factor in Intel owning the specification. You have just made an argument that Intel will do anything to win, and yet you expect AMD to bite on Intel-owned IP where revisions to the specification could be changed unilaterally by Intel. You do remember that AMD signed a long-term deal for Intel processors with the 8085, and Intel promptly stiffed AMD on the 8086, forcing AMD to sign up with Zilog? Or Intel granting AMD an x86 license, then stiffing them on the 486?


lilhasselhoffer said:


> I don't understand what makes the HPC market any different.


Intel owns the PC market. It doesn't own the enterprise sector.
In the PC space, Nvidia is dependent upon Wintel. In the enterprise sector it can pick and choose an architecture. Nvidia hardware sits equally well with IBM, ARM, or x86, and unlike consumer computing, IBM particularly is a strong competitor and owns a solid market share. You don't understand what makes the HPC market any different? Well, the short answer is IBM isn't AMD, and enterprise customers are somewhat more discerning in their purchases than the average consumer... and as you've already pointed out, ARM isn't an issue for Intel (or RISC in general for that matter) in PCs. The same obviously isn't true in enterprise.


lilhasselhoffer said:


> If you'd like a more recent history lesson on Intel introducing their own standards, let's review QPI.


Proprietary tech. Used by Intel (basically a copy of DEC's EV6 and later AMD's HyperTransport). Not used by AMD. Its introduction led to Intel giving Nvidia $1.5 billion. Affected nothing other than removing Nvidia MCP chipsets thanks to the FSB stipulation in licensing - of little actual consequence, since Nvidia was devoting fewer resources to them after the 680i. That about covers it, I think.

You seem to be forecasting a doomsday scenario that is 1. probably a decade away, and 2. being prepared for now.
By the time PCI-E phases out, the industry will have moved on to embedded solutions and it will be a moot point.
Anyhow, I think I'm done here. My background is in big iron (my first job after leaving school was coding for Honeywell and Burroughs mainframes), and I keep current even though I left the industry back in '92 (excepting the occasional article), so I'm reasonably confident in my view. I guess we'll find out in due course how wrong, or right, we were.


----------



## lilhasselhoffer (Nov 20, 2015)

HumanSmoke said:


> ..
> You seem to be forecasting a doomsday scenario that is 1. probably a decade away, and 2. being prepared for now.
> By the time PCI-E phases out, the industry will have moved on to embedded solutions and it will be a moot point.
> Anyhow, I think I'm done here. My background is in big iron ( my first job after leaving school was coding for Honeywell and Burroughs mainframes), and I keep current even though I left the industry back in '92 (excepting the occasional article writing) , so I'm reasonably confident enough in my view. I guess we'll find out in due course how wrong, or right we were.



I was reasonably certain that I made that clear here:



lilhasselhoffer said:


> ....
> As such, let's figure this out.  PCI-e 4.0 may well be featured heavily in both the consumer (Intel has mild competition) and server (Intel has a death grip) markets.  These particular products are continually improved, but they're iterative improvements.  It isn't a stretch to think that* they'll have PCI-e 4.0 in the next generation*, given that it's a minor improvement.  While the consumer and server markets continue to improve, the vast majority of research and development is done on the HPC market.  A market where money is less of an object, and where a unique new connection type isn't a liability, if better connection speeds can be delivered.
> ....
> *By the time the new technologies filter down* to consumer and server level hardware PCI-e 4.0 will have been around for a couple of years.  Intel will have already utilized PCI-e as they pushed for, while already being out from the FTC's restrictions on including PCI-e.  They'll be able to offer token PCI-e support, and actually focus on their own interconnect.  It'll have taken at least a few years to filter to consumers, but the money Intel invested into research isn't going to be forgotten.
> ...



If that wasn't clear, yes I was referring to the future.  Not next year, not two years from now, more like 5-7 years at earliest.  More likely 10+ years away.


----------



## HumanSmoke (Nov 20, 2015)

lilhasselhoffer said:


> If that wasn't clear, yes I was referring to the future.  Not next year, not two years from now, more like 5-7 years at earliest.  More likely 10+ years away.


A decade was always my stance (since Intel has made no secret of KNL+FPGA). The lack of clarity, I think, originates in how your discussion points have gone from no PCI-E 4.0 at all (which is slated for introduction within two years, and the point I originally found contentious) ...


lilhasselhoffer said:


> For example,* instead of introducing PCI-e 4.0, introduce PCE* (Platform Connect Experimental)....


...to 4-6 years (a 2-4 year PCI-E 4.0 platform cycle is way too short for HPC)


lilhasselhoffer said:


> I'll admit that the next couple of generations aren't likely to jettison PCI-e, and Intel will in fact embrace 4.0.  *What I'm worried about is 4-6 years down the line*



Ten years is my own estimate, so I have no issue with it. Architecture, logic integration, product cadence, and process node should all come together around that timeframe to allow most systems to be MCM in nature (assuming the cadence for III-V semiconductor fabbing at 5nm/3nm remains on track), if not SoC.
To be honest, I saw the theory of Intel tossing PCI-E 4.0 as somewhat inflammatory, alarmist, and so remote a possibility as to be impossible, based on what needs to happen for that scenario to play out.


----------



## arbiter (Nov 20, 2015)

medi01 said:


> Fury is roughly on par with Maxwell on power efficiency.
> Interesting to see who will have the better process: GloFo 14nm or TSMC 16nm.
> Samsung's 14nm was rumored to suck.


Um, I don't think that is true. Think about how it's compared: the 980 Ti is a 250 W card and the Fury X is 275 W. If you took the draw of GDDR5 out of the numbers and used what HBM draws, I would bet that "on par" would show a bit more of a gap. I don't know the exact power draw of GDDR5 or HBM, but it would likely widen that gap, since HBM is claimed to need less power.


----------



## Casecutter (Nov 20, 2015)

arbiter said:


> straight from AMD themselves. In Short words, proprietary use of the protocol.


"DisplayPort Adaptive-Sync is an ingredient DisplayPort feature that enables real-time adjustment of monitor refresh rates."

That technology can be harnessed for anyone's hardware, as long as they write the software/drivers for it. That is NOT "proprietary use of the protocol"; it's simply that AMD's *FreeSync* software/drivers are required to use it with AMD hardware. I see no problem.


----------



## medi01 (Nov 20, 2015)

arbiter said:


> Um, I don't think that is true. Think about how it's compared: the 980 Ti is a 250 W card and the Fury X is 275 W. If you took the draw of GDDR5 out of the numbers and used what HBM draws, I would bet that "on par" would show a bit more of a gap. I don't know the exact power draw of GDDR5 or HBM, but it would likely widen that gap, since HBM is claimed to need less power.


Fury has that power-consuming water pump thingy (on the other hand, being water cooled also reduces temperature and hence power consumption, so one could argue that evens out... but then you have the air-cooled Nano).

Heck, leaving Fury alone, even Tonga isn't far off:

*[performance-per-watt chart]*

And the 380X is significantly faster than the 960.


----------



## HumanSmoke (Nov 20, 2015)

medi01 said:


> Fury has that power consuming water pipe thingy


The Fury is air cooled. If you are referring to the Fury X and its AIO water cooling, the pump for such kits uses less than 5 W in most cases; Asetek's is specced for 3.1 W.
If you are comparing performance per watt, I'd suggest viewing this and then factoring in the wattage difference in the power consumption charts. I don't think there are any published figures for HBM vs GDDR5 IMC power draw, but AMD published some broad numbers just for the memory chips.



Spoiler

*[AMD slide: memory power efficiency, HBM at roughly 35 GB/s per watt vs GDDR5 at roughly 10.66 GB/s per watt]*
The Fury X has 512 GB/s of bandwidth, so 512 / 35 ≈ 14.6 W.
The GTX 980 Ti has 336 GB/s of bandwidth, so 336 / 10.66 ≈ 31.5 W (16.9 W more, which should be subtracted from the 980 Ti's power consumption for a direct comparison).

I'd note that the 16.9 W is probably closer to 30-40 W overall once you include differences in memory controller power requirements and real-world usage, but accurate figures seem to be hard to come by. Hope this adds some clarification for you.
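As a sanity check, the arithmetic above can be sketched in a few lines. The 35 and 10.66 GB/s-per-watt efficiency figures are the rough vendor numbers quoted in this post, not measured values, so treat the results as ballpark estimates only:

```python
# Estimate memory-subsystem power draw from bandwidth and an efficiency figure.
# Efficiency figures (GB/s per watt) are the rough vendor numbers quoted above.
def memory_power_w(bandwidth_gbps: float, gbps_per_watt: float) -> float:
    """Return estimated memory power draw in watts."""
    return bandwidth_gbps / gbps_per_watt

fury_x_hbm = memory_power_w(512, 35.0)     # HBM on the Fury X
gtx980ti_g5 = memory_power_w(336, 10.66)   # GDDR5 on the GTX 980 Ti

print(f"Fury X HBM:   {fury_x_hbm:.1f} W")                  # ~14.6 W
print(f"980 Ti GDDR5: {gtx980ti_g5:.1f} W")                 # ~31.5 W
print(f"Difference:   {gtx980ti_g5 - fury_x_hbm:.1f} W")    # ~16.9 W
```

As noted above, this only covers the memory ICs themselves; IMC power is excluded, which is why the real-world gap is likely larger.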


----------



## nem (Nov 25, 2015)

Still waiting for HBM...


----------



## deemon (Nov 25, 2015)

HumanSmoke said:


> Spoiler





Spoiler


----------



## medi01 (Nov 25, 2015)

HumanSmoke said:


> The Fury is air cooled. If you are referring to the Fury X and it's AIO water cooling, the pump for such kits uses less than 5W in most cases. Asetek's is specced for 3.1W



3 W is on par with 200 mm air coolers:
*Power consumption:* 3.36 W
link

An idling Fury X consumes 15 W more than an idling Nano.
HardOCP


----------



## HumanSmoke (Nov 25, 2015)

deemon said:


> Spoiler


What's your point? Posting something besides a slide would be helpful in a reply, since you are quoting my post. The slide does nothing to contradict what I posted: you can clearly see from it that HBM uses just under half the power of GDDR5 (and I'm talking about both the IC and the overall system), which is something I already worked out for you in my previous post. The only difference is the IMC power requirement, something I noted would be a factor but had no hard figures for. The ballpark figures are well known (I've used them on occasion myself), but since AMD isn't particularly forthcoming with its GDDR5 IMC power usage, it is flawed to factor in Nvidia's numbers, given their more aggressive memory frequency profiles.


medi01 said:


> Idling Fury X consumes 15w more than idling Fury Nano.


Why should that surprise anyone? Nano chips are binned for lower overall voltage, and the GPU idle voltage for the Nano is lower than for the Fury X (0.9 V vs 0.968 V). W1zz also noted a 9 W discrepancy at the time, so you aren't exactly breaking new ground.


----------



## Vlada011 (Nov 27, 2015)

When I think about it, a Pascal GP100 will cost me as much as a whole X99 platform: memory, GPU, CPU.
But the graphics card is the most important part.


----------

