Wednesday, September 24th 2014

NVIDIA Sacrifices VESA Adaptive Sync Tech to Rake in G-SYNC Royalties
NVIDIA's G-SYNC technology is rivaled by AMD's Project FreeSync, which is based on a technology standardized by the Video Electronics Standards Association (VESA) as Adaptive-Sync. The technology lets GPUs and monitors keep display refresh rates in sync with GPU frame rates, so the resulting output appears fluid. VESA's technology requires no special hardware inside standards-compliant monitors and is royalty-free, unlike NVIDIA G-SYNC, which relies on specialized hardware that display makers have to source from NVIDIA, effectively amounting to a royalty.
When asked by Chinese publication Expreview whether NVIDIA GPUs will support VESA Adaptive-Sync, the company said it wants to focus on G-SYNC. A case in point is the display connector loadout of the recently launched GeForce GTX 980 and GTX 970. According to specifications listed on NVIDIA's website, the two feature DisplayPort 1.2 connectors, and not DisplayPort 1.2a, a requirement of VESA's new technology. AMD's year-old Radeon R9 and R7 GPUs, on the other hand, support DisplayPort 1.2a, casting suspicion on NVIDIA's choice of connectors. Interestingly, the GTX 980 and GTX 970 do feature HDMI 2.0, so it's not as if NVIDIA is slow to adopt new standards. Did NVIDIA leave out DisplayPort 1.2a in a deliberate attempt to check Adaptive-Sync?
Source: Expreview
114 Comments on NVIDIA Sacrifices VESA Adaptive Sync Tech to Rake in G-SYNC Royalties
Adaptive Sync has been around far longer than G-Sync, just in laptops, and now that it's a DisplayPort 1.2a+ standard, it will be open to ALL desktop hardware.
Nvidia had no reason to create G-Sync; they could have done exactly what AMD did and pushed for Adaptive-Sync, but they chose to create a completely separate proprietary technology instead. At a hardware level, G-Sync has no real advantages over Adaptive-Sync or FreeSync.
Nvidia only did what they did because they are like Apple: they go to great lengths to maintain a closed ecosystem and are dead set against anything open in many aspects of their business. It's just how they operate.
PhysX is a perfect example: the engine can run on any hardware, and the only reason it won't run on AMD or any other GPU is because it's locked, not because of any hardware limitation. This is something Nvidia did shortly after buying the engine from Ageia. In fact, you used to be able to run PhysX on an ATI GPU via a modified driver. However, Nvidia went to great lengths to prevent this, and now if you want to run PhysX on anything but a pure Nvidia system, you need a hybrid AMD/Nvidia setup and a modified driver. The only reason this is not blocked yet is because it's a software-level block and there is little Nvidia can do to stop it.
The thing is, there is no point: by locking down PhysX, Nvidia has come really close to killing it. The number of games that use it at the GPU level is minuscule, and dropping rapidly, compared to Havok or engine-specific physics, both of which can do anything PhysX can and are not hardware-locked or limited.
More recently, Nvidia has gone so far as to lock down the libraries used with GameWorks in ways that actually hinder the performance of non-Nvidia GPUs.
I am not trying to come off as a hater or fanboy, just pointing out facts.
In my opinion, if this is true, it's a bad move for Nvidia. Hardware and software are moving more toward open standards, and Nvidia no longer rules the discrete GPU world; AMD has a very large share and it's showing no signs of slowing. In the end, this will only hurt Nvidia's business. There will be little to no reason to buy G-Sync over an Adaptive-Sync-capable display. Fewer displays and models will support G-Sync than Adaptive-Sync, since the latter is a standard and G-Sync is not. The G-Sync displays will likely cost more, since the hardware is proprietary, and you will get no real added benefit other than the opportunity to wear your green-team tag with pride.
=]
So to sum up, if you want something like LightBoost or ULMB in the future, you'll most likely have to buy a G-Sync monitor as I'm sure ULMB will remain an exclusive feature.
Another possibility is that they are going to offer it, but are just not saying so because they don't want to hurt current G-Sync sales.
Wanting PhysX but not wanting to pay for it... well, that's like waiting for your neighbour to buy a lawnmower rather than buying one yourself, then turning up on his doorstep to borrow it... and expecting him to provide the gas for it. You mean aside from the fact that you can't buy an Adaptive-Sync monitor at the moment? And Nvidia will most likely adopt Adaptive-Sync once it does become mainstream. At the moment Adaptive-Sync is a specification - Nvidia makes no money off a VESA specification; it does, however, derive benefit from current G-Sync sales.
Tegra 4 caused over a $400 million operating loss in the Tegra division over two years, as the same R&D team is incapable of efficiency. I cannot understand what kind of numbers are rolling around in your head, but to pull off any kind of beta silicon, iron it out, and feed the binary-blob coders and advertising monkeys costs millions...
It won't be profitable ever... Snake oil.
www.blurbusters.com/lightboost-sequel-ultra-low-motion-blur-ulmb/
hardforum.com/showthread.php?t=1812458
I don't know if they changed that yet though.
Most other people who understand how the business actually works realise that halo and peripheral products are tools to enable further sales of mainstream products. Know what else doesn't turn a monetary profit? Gaming development SDKs, software utilities, Mantle(piece), PhysX, NVAPI, and most limited-edition hardware. The profit comes through furtherance of the brand.
Why else do you think AMD poured R&D into their gaming program, Mantle(piece), and resources to bring analogues of ShadowPlay and GeForce Experience into being?
For a self-professed business genius, you don't seem very astute in the strategy of marketing and selling a brand.
Mantle is just a natural side product of Xbone/PS4 SDK development. It also does not require new ASICs...
Boney... You act like a cheap car salesman...
Could you make your argument any weaker? (Don't answer that as I'm sure you'll outdo yourself ;)) Just doing my bit for the proud tradition of internet speculation and my love of the running gag - also an appreciative nod toward your theory on AMD's Forward Thinking™ Future
In theory, it's a really simple process. The card polls the monitor to find out what its refresh rate range is. Say it's 30-60 Hz. Any frame that falls within that range is released immediately, and the screen is instructed to refresh with it. If the card is going too slow, it will resend the previous frame. Too fast, and it will buffer the frame until the monitor is ready.
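To make that three-way decision concrete, here is a minimal sketch of the loop described above, written in Python. The gpu and monitor objects and their methods (poll_refresh_range, render_next_frame, refresh) are hypothetical placeholders, not any real driver or VESA API; the point is only to illustrate the release-immediately / resend-previous / buffer-until-ready logic.

import time

def adaptive_sync_loop(gpu, monitor):
    # Ask the monitor for its variable refresh range, e.g. 30-60 Hz.
    # (poll_refresh_range is a hypothetical method for this sketch.)
    min_hz, max_hz = monitor.poll_refresh_range()
    min_interval = 1.0 / max_hz   # shortest allowed gap between refreshes
    max_interval = 1.0 / min_hz   # longest allowed gap between refreshes

    previous_frame = None
    last_refresh = time.monotonic()

    while True:
        frame = gpu.render_next_frame()
        elapsed = time.monotonic() - last_refresh

        if elapsed > max_interval and previous_frame is not None:
            # Card is too slow: repeat the previous frame so the panel
            # never waits longer than its maximum refresh interval.
            monitor.refresh(previous_frame)
            last_refresh = time.monotonic()
        elif elapsed < min_interval:
            # Card is too fast: buffer the frame until the monitor is ready.
            time.sleep(min_interval - elapsed)

        # The frame now falls within the panel's range: release it
        # immediately and instruct the screen to refresh with it.
        monitor.refresh(frame)
        previous_frame = frame
        last_refresh = time.monotonic()

In real hardware the timing lives in the display driver and the panel's scaler rather than a software loop like this, but the decision points map onto the same range check.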
With that said, we'll have to wait until samples are available for testing before we know for sure. It could be worse, or it could be better. When there is competition in the marketplace, the price will come down. As long as nVidia is the only one producing the needed hardware, they have no reason to lower prices, because they are selling to multiple OEMs.
The water-cooling solution is also not designed from zero; they use Asetek, and not, again, a new ASIC built from scratch.
Okay, the Battlefield PR... I see you have gone even further into cheap car sales. And how is that connected with G-Sync? They both invest in game companies.
How is pulling your nose into AMD's PR business connected to G-Sync R&D costs and the profitability of this program...
Boney you left your pelvic bone somewhere...
Besides, the statement again says "Nvidia no longer rules the discrete GPU world", not other markets.
This is obviously beyond you, or you just fail to see how business uses brand strategy to maintain and build market share. It's not rocket science. Please feel free to do whatever you're doing, but your audience just decreased by at least one. Is that another weird Latvian saying, like "pulling my nose"?
You are comparing pocket-money costs, like designing an AIO... not making a brand-new product...
The saying is just as weird as the way you operate with facts... Just goofing around with numbers like a cheap car salesman...
Because market share has a direct bearing upon revenue, revenue has a direct bearing on R&D, R&D has a direct bearing on future products, and future products have a direct bearing upon market share? You see where this is going? You really only have to look at the histories of 3dfx, S3, Rendition, Cirrus Logic, Trident, SGI, Tseng Labs, 3DLabs, and a host of other IHVs to see what happens when the cycle fails to provide uplift. Likewise, if enough people think the same way, then their market share increases, they have more funds for development, and they stay competitive. Matrox won't offer you a card better suited to your needs because they failed to evolve as fast as ATI and Nvidia. The G400 was a great series of cards, but the writing was on the wall when they failed to develop a relevant successor. Everyone (should) buy on features, reliability, and performance - group all of those everyones together and you have market share.