Tuesday, September 11th 2018

NVIDIA Reportedly Moves NDA Date for RTX Reviews to September 19th

Videocardz is reporting that NVIDIA has moved the NDA date for reviews of its RTX 2080 graphics cards. They cite difficulties for review websites in securing samples, delays in shipment, and even unavailable driver stacks that would allow reviewers to do their jobs with the usual professionalism. Remember that the original NDA timeframe for reviews, as reported by Videocardz, was set at September 17th, which, as of today, would leave reviewers with less than a full week to conduct their testing.

The website reports that "only a handful" of reviewers have received their cards so far, and that the review NDA for NVIDIA's GeForce RTX 2080 has now been aligned with that of the RTX 2080 Ti, on September 19th, leaving reviewers with two huge card launches and a single deadline, just before the cards' general availability on September 20th.
Source: Videocardz

50 Comments on NVIDIA Reportedly Moves NDA Date for RTX Reviews to September 19th

#26
robal
qubit@W1zzard Are you able to confirm that this has happened? Presumably NVIDIA would have informed you.
I'm hoping Wizz is too busy benching the crap out of it to reply.
Posted on Reply
#27
cdawall
where the hell are my stars
dj-electricBeing a part of the media myself, having received over 50 CPUs in the past 7-8 years, and knowing and talking to many in this industry worldwide, I still stand by my words. RNG is RNG.
The sheer number of horribly binned CPUs that I and others got tops the better-than-average ones

People love fantasizing about evil-doings of big hardware companies. Seems like it almost turns them on.
The retail sample (hand-tested by AMD) 1800X I got was mediocre at best. I have had quite a few Intel ES chips and they were all horribly average as well.
Posted on Reply
#28
R-T-B
FordGT90ConceptNot at all unless it is overclocked or thermal throttling. :roll:
I mean, in the age of "GPU boost 2.0" we're pretty much always throttling to some degree. Otherwise we'd "boost" to the moon... and possibly burn something.
Posted on Reply
#29
TheoneandonlyMrK
cdawallThe retail sample (hand-tested by AMD) 1800X I got was mediocre at best. I have had quite a few Intel ES chips and they were all horribly average as well.
Well, I get what you two are saying, but you're also talking about early silicon, so the best of the first run might not be that great compared to a tweaked second or third run (and I don't mean a new generation; the process is continuously tweaked to improve yields). I hope you're right, but not all companies play fair all the time, and that's just the way of big business.

Also, no need for the overly aggro rebuttals. I'm not calling you anything, Dave, but would I like freebies? Yes, sorry, is there someone who wouldn't?
Posted on Reply
#30
JaymondoGB
It gives them longer to sell pre-orders before the balloon goes up.
Posted on Reply
#31
qubit
Overclocked quantum bit
FordGT90ConceptTalking about the terms of the NDA is a violation of the NDA, so if he signed it, he won't talk about it.

These are really big, expensive chips and TSMC's plate is full. This doesn't surprise me at all.
It should be safe to confirm if an NDA expiry date has moved, surely? I'm not asking for any other info.
Posted on Reply
#32
efikkan
theoneandonlymrkEvery chip is binned.

To be fair, as you say, random chips from the production run would be used; however, Nvidia will be dispatching these special review packs directly, and they always play fair, right?
Every microchip is "binned". Binned just means it's sorted in bins. Cherry-picking/golden samples is actually kind of the "opposite"; it's sorting within the bin, or binning inside the bin.

Most reviews conduct testing under conditions that differ from any typical built computer. Today's GPUs and CPUs do aggressive boosting and throttling, and small variations in cooling conditions can easily skew the results quite a bit.
SteevoLike him or not, his math stands up; companies are still sending cherry-picked samples.
His math never holds up. This is typical conspiracy material: they build a train of thought where the initial parts may not seem too far off, while making several fatal assumptions along the way. This theory is not even an apples-to-apples comparison, so it's worthless; there are numerous sources of error:
- Reviewers use different stability tests to qualify a chip for a certain speed. One reviewer might say a chip is stable at 5.2 GHz; another does more stability testing and ends up with a more conservative 5.0 GHz on the same sample. I assume the guys at "Silicon Lottery" have a standardized routine, but it is not the same test reviewers do.
- Reviews are done under different testing conditions, and most are done on an open rig, which achieves thermals completely different from a closed rig. The temperature and throttling of CPUs and GPUs can differ quite a bit between test setups, which can easily add a 5% variation.
- Vcore across reviews is not reliable; it varies from motherboard to motherboard and can easily differ by 0.1-0.2 V. GamersNexus did a whole video on this.
- Reviews are conducted under different environmental conditions. Not only ambient temperature, but also pressure and humidity, affect the thermal capacity of air.
- Overclocking has many more parameters than just max clock and Vcore.

This is more than enough to discard his "proof". The only way to prove the theory of golden samples for reviewers is an apples-to-apples comparison with a good sample size (>=10) under "identical" conditions; otherwise each of these factors adds a small margin of error, and those errors stack up into a large error in the final result.
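
To put rough numbers on how those small errors stack, here's a quick back-of-envelope Python sketch; the per-factor percentages are illustrative assumptions on my part, not figures taken from any review:

# Rough illustration of how individually small, independent error sources
# combine across reviews. The spreads below are assumed for this example only.
import random

ASSUMED_SPREADS = {                      # +/- fractional variation per factor
    "stability criteria": 0.04,
    "open vs. closed rig thermals": 0.05,
    "vcore / board differences": 0.03,
    "ambient conditions": 0.02,
}

def reported_clock(true_clock_ghz):
    """One hypothetical review's reported max overclock, in GHz."""
    clock = true_clock_ghz
    for spread in ASSUMED_SPREADS.values():
        clock *= 1.0 + random.uniform(-spread, spread)
    return clock

random.seed(0)
samples = [reported_clock(5.0) for _ in range(10000)]
print(f"spread on an identical 5.0 GHz chip: {min(samples):.2f}-{max(samples):.2f} GHz")
# With these assumed numbers the spread across "reviews" is several hundred MHz,
# larger than the 100-200 MHz difference a golden sample would be expected to show.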

In conclusion: there is still no evidence that Intel, AMD or Nvidia is shipping golden samples to reviewers.
And just think about it: if they shipped golden samples to reviewers for one generation, they would have to ship even better golden samples for the next generation, otherwise the next generation would look bad. None of these vendors is stupid enough to shoot themselves in the foot like that, so myth busted.
Posted on Reply
#33
TheoneandonlyMrK
efikkanEvery microchip is "binned". Binned just means it's sorted in bins. Cherry-picking/golden samples is actually kind of the "opposite"; it's sorting within the bin, or binning inside the bin.

Most reviews conduct testing under conditions that differ from any typical built computer. Today's GPUs and CPUs do aggressive boosting and throttling, and small variations in cooling conditions can easily skew the results quite a bit.


His math never holds up. This is typical conspiracy material: they build a train of thought where the initial parts may not seem too far off, while making several fatal assumptions along the way. This theory is not even an apples-to-apples comparison, so it's worthless; there are numerous sources of error:
- Reviewers use different stability tests to qualify a chip for a certain speed. One reviewer might say a chip is stable at 5.2 GHz; another does more stability testing and ends up with a more conservative 5.0 GHz on the same sample. I assume the guys at "Silicon Lottery" have a standardized routine, but it is not the same test reviewers do.
- Reviews are done under different testing conditions, and most are done on an open rig, which achieves thermals completely different from a closed rig. The temperature and throttling of CPUs and GPUs can differ quite a bit between test setups, which can easily add a 5% variation.
- Vcore across reviews is not reliable; it varies from motherboard to motherboard and can easily differ by 0.1-0.2 V. GamersNexus did a whole video on this.
- Reviews are conducted under different environmental conditions. Not only ambient temperature, but also pressure and humidity, affect the thermal capacity of air.
- Overclocking has many more parameters than just max clock and Vcore.

This is more than enough to discard his "proof". The only way to prove the theory of golden samples for reviewers is an apples-to-apples comparison with a good sample size (>=10) under "identical" conditions; otherwise each of these factors adds a small margin of error, and those errors stack up into a large error in the final result.

In conclusion: there is still no evidence that Intel, AMD or Nvidia is shipping golden samples to reviewers.
And just think about it: if they shipped golden samples to reviewers for one generation, they would have to ship even better golden samples for the next generation, otherwise the next generation would look bad. None of these vendors is stupid enough to shoot themselves in the foot like that, so myth busted.
Again, a little over the top. Welcome to Nirvana, no one cheats here.

Good times indeed.

Oh, and I expressed an opinion, so chill with the manhunt for a tool; there are way better conspiracies out there. These other two guys' opinions I'll credit with worth, hence I adjusted my stance from "I think it's common" to "fair enough, but cheating is still possible".
Posted on Reply
#34
Upgrayedd
jaggerwildLolz@Nvidia FAILURE!!!!!
Uhhh, wot... Controlling the GPU market and having the fastest cards available at whatever price you want, because there is no competition, seems like the opposite of failure to me. I think you meant AMD's GPU division, for letting Nvidia be so dominant.
Posted on Reply
#35
moproblems99
theoneandonlymrkI have this down as a decision to stop pre-orders being cancelled, personally, but I'm a cynical Brit.
This is my first thought. Only time will tell.
Posted on Reply
#36
Prima.Vera
The rumor going around is that the software used to test the RTX cards is not in the best shape, both drivers and apps/games, so they are waiting for some fresh updates.
Maybe Wizz can confirm/deny once all this NDA crap is over...
Posted on Reply
#37
moproblems99
Prima.VeraThe rumor going around is that the software used to test the RTX cards is not in the best shape, both drivers and apps/games, so they are waiting for some fresh updates.
Maybe Wizz can confirm/deny once all this NDA crap is over...
Not doubting you, but there are, what, three games that have ray tracing in them? At least one of them won't be released when the cards launch. Considering the XX80 and XX80 Ti are primarily used for gaming, I don't see "software used to test RTX cards" being the reason. In other words, games perform like crap when RTX is enabled and/or performance in general is not that much better than the 10 series. Or the drivers are not in great shape, which would be concerning with release nine days away. This seems more likely to be a way to reduce pre-order cancellations.
Posted on Reply
#38
xorbe
So it's a bonus that I didn't get into the first pre-order batch, then; I still get an extra week to see results!
Posted on Reply
#39
RealNeil
ottonomousI would be interested in non-signatories reviewing purchased units.
^*This*^
moproblems99Not doubting you, but there are, what, three games that have ray tracing in them? At least one of them won't be released when the cards launch.
Maybe it shouldn't all be about ray tracing? For such an expensive series of cards, they should do everything well.
All I want them to do is release and then nudge 1080Ti prices down a little more.
Posted on Reply
#40
cucker tarlson
efikkanHis math never holds up. This is typical conspiracy material: they build a train of thought where the initial parts may not seem too far off, while making several fatal assumptions along the way. This theory is not even an apples-to-apples comparison, so it's worthless; there are numerous sources of error
You can just look at forums like TPU or OCN to see what frequencies users hit at what voltages, then look at the early reviews, and draw the easy conclusion that some (not all) of the press got 8700K and 8600K samples that hit 100-200 MHz more at the same or lower voltage than most people do. Taking into account what you wrote, which is all accurate, I think it was about making sure every review hit the magical 5 GHz; it's a psychological thing for the buyer to see. I think they just went through the bins to make sure they didn't send bad ones, which is cherry-picking too.
But no, better to conduct an investigation that's flawed all along the way. :laugh: Like I said before, this AdoredTV person is a B- tech journalist and an A+ conspirator.
Posted on Reply
#41
Vayra86
cucker tarlsonThe question is how much binning improves performance. A good bin of Pascal and an average one will be fractions of an FPS apart.
Bingo. It barely does. It's been proven that the major factor in Pascal performance isn't binning but cooling, and the follow-up architectures will not be any different; in fact, Nvidia said this themselves. They now employ dual-fan stock cooling to make sure the card boosts properly and makes use of the extra TDP headroom. They sell that as "great overclocking potential" while in fact it just means "less throttling, more boosting".

So even though Intel may bin its 8700K, that says nothing about Nvidia's ability to do binning / cherry-picking. They literally removed that variance themselves through design. Most of the binning these days is "marketing" for Nvidia GPUs. Hell, these "top bins" still run the same BIOS and drivers with the same limitations, and all of them run into temperature constraints anyway.

As for the late arrival of review samples, I can see only one real reason, and that is damage control, because so far Turing is not really making waves anywhere. They want to max out the pre-order advantage, so they deliver stuff late and people get impatient. If you can't get a few dozen samples out in time, and people here attribute that to "yield and production issues", you've lost the plot, really. How would they ever sell anything even months from now if they can't manage such small volumes today? The reality is, Nvidia benefits from shaky, hastily done reviews. Those are much easier to spin than an extensive, well-researched piece.

FWIW, again, these are all big-ass red flags that tell me to stay far away from this steaming pile of garbage.
Posted on Reply
#42
las
1st gen RTX is going to be a failure. Even the 2080 Ti is too slow to use it properly.
Skipping this 12nm garbage.
Next GPU for me is 7nm.
Posted on Reply
#43
BluesFanUK
It's so those who pre-ordered don't have sufficient time to cancel when it inevitably disappoints.
Posted on Reply
#44
efikkan
las1st gen RTX is going to be a failure. Even the 2080 Ti is too slow to use it properly.

Skipping this 12nm garbage.

Next GPU for me is 7nm.
So a product is a failure because a new feature has limited use? How does this invalidate RTX cards as a product for non-ray-tracing use?
Posted on Reply
#45
Vayra86
efikkanSo a product is a failure because a new feature has limited use? How does this invalidate RTX cards as a product for non-ray-tracing use?
Price point & efficiency + raw performance.
Posted on Reply
#46
efikkan
Vayra86Price point & efficiency + raw performance.
So far, the only valid complaint about Turing is the pricing. All indicators point to improved performance and efficiency.
How can it be bad to have a feature you don't need?
Did you return your GCN card when AMD dropped Mantle? Do you go to a car dealer and refuse to buy a car with a towing hitch because you don't need it?
Posted on Reply
#47
Vayra86
efikkanSo far, the only valid complaint about Turing is the pricing. All indicators point to improved performance and efficiency.
How can it be bad to have a feature you don't need?
Did you return your GCN card when AMD dropped Mantle? Do you go to a car dealer and refuse to buy a car with a towing hitch because you don't need it?
Those three metrics are closely related though: performance per dollar matters, and Turing, despite its alleged improved efficiency, does not improve on that at all. So there is very little reason to buy into it if you already have that raw performance today. If my car with a towing hitch cost 30% more and was otherwise precisely the same as what I had, would that be a good deal? Or would you consider it a step forward?
Posted on Reply
#48
efikkan
Vayra86Those three metrics are closely related though: performance per dollar matters, and Turing, despite its alleged improved efficiency, does not improve on that at all. So there is very little reason to buy into it if you already have that raw performance today. If my car with a towing hitch cost 30% more and was otherwise precisely the same as what I had, would that be a good deal? Or would you consider it a step forward?
Paying more for the same "goods" is never a step forward. But why do you pretend like Turing is not a major step forward in performance? Which benchmarks are you basing this on?
Posted on Reply
#49
las
efikkanSo a product is a failure because a new feature has limited use? How does this invalidate RTX cards as a product for non-ray-tracing use?
The performance increase from Pascal to Turing in non-RTX workloads is a joke, and 7nm will be out sooner rather than later.

AMD is releasing 7nm GPUs this year.
Nvidia will follow next year.

Turing on 12nm is a milking move because there is no competition. AMD's 7nm GPUs will change this.
Posted on Reply
#50
efikkan
lasThe performance increase from Pascal to Turing in non-RTX workloads is a joke, and 7nm will be out sooner rather than later.
Based on which benchmarks?

Nvidia is claiming 35-45% performance gains; even if the average is closer to ~30%, this would be a significant improvement, and still excellent in a historical perspective.
lasAMD is releasing 7nm GPUs this year.
More like "paper launching" Vega20 for the professional market. The shipped quantities will be very low and this will not be a consumer card.
Even at 7 nm, AMD can't compete with Pascal at 16 nm.
lasTuring on 12nm is a milking move because there is no competition. AMD's 7nm GPUs will change this.
AMD's upcoming Navi is targeting "Vega level performance" in mid- to late 2019.

Nvidia has no choice other than launching Turing on "12 nm"; the alternative would be to postpone it for one more year.
Posted on Reply