# NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed



## btarunr (Aug 26, 2020)

NVIDIA today shared the design philosophy behind the cooling solution of its next-generation GeForce "Ampere" RTX 3080 / 3090 graphics cards, which we'll hopefully learn more about on September 1, when NVIDIA has scheduled a GeForce Special Event. Part of the new video presentation shows the evolution of NVIDIA's cooling solutions over the years. NVIDIA explains the four pillars behind the design, stressing that thermals are at the heart of its innovation, and that the company continues to explore new ways to use air-cooling more effectively. To this end, the cooling solution of the upcoming GeForce Ampere Founders Edition graphics cards features an airflow-optimized design focused on taking in fresh air, transferring heat to it, and exhausting the warm air as effectively as possible.

The next pillar of NVIDIA's cooling technology innovation is mechanical structure: minimizing the structural components of the cooler without compromising strength. The new Founders Edition cooler introduces a new low-profile leaf spring that leaves more room for a back cover. Next up is reducing electrical clutter, with the introduction of a new 12-pin power connector that is more compact and consolidates cabling, yet does not affect the card's power delivery capability. The last pillar is product design, which puts NVIDIA's innovations together in an airy new industrial design. The video presentation includes commentary from NVIDIA's product design engineers, who explain the art and science behind the next GeForce. NVIDIA is expected to tell us more about the next-generation GeForce Ampere at a Special Event on September 1.



 

 

 






Although the video does not reveal any picture of the finished product, the bits and pieces of the product's wire-frame model and the PCB wire-frame confirm the Founders Edition design that has been extensively leaked over the past few months. NVIDIA mentioned that all its upcoming cards with the 12-pin connector will include a free adapter that converts standard 8-pin PCIe power connectors to 12-pin, so there's no additional cost for you. We've heard from several PSU vendors who are working on adding native 12-pin cable support to their upcoming power supplies.

The promise of backwards compatibility has further implications: there is no technical improvement other than the more compact size. If the connector works through an adapter cable with two 8-pins on the other end, its maximum power capability must be 2x 150 W, at the same current rating defined in the PCIe specification. The new power plug will certainly make graphics cards more expensive, because it is produced in smaller volume, driving up BOM cost, plus the cost of the adapter cable. Several board partners hinted to us that they will continue using traditional PCIe power inputs on their custom designs.
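As a sanity check, the power ceiling implied by the adapter can be worked out directly from the PCIe limits. A back-of-the-envelope sketch; the 150 W and 75 W figures are the spec limits, everything else is illustrative:

```python
# Power budget for a card fed via the bundled 2x 8-pin -> 12-pin adapter.
PCIE_8PIN_W = 150   # max per 8-pin PCIe connector (PCIe spec)
PCIE_SLOT_W = 75    # max delivered through the PCIe slot itself (spec)

def max_board_power(num_8pin_on_adapter: int = 2) -> int:
    """Upper bound on board power when fed through the adapter."""
    return num_8pin_on_adapter * PCIE_8PIN_W + PCIE_SLOT_W

print(max_board_power())   # 375 (2x 150 W + 75 W slot)
print(max_board_power(1))  # 225 (a single 8-pin would not be enough)
```

This is why the adapter must terminate in two 8-pins: a single 8-pin plus slot power caps out at 225 W, well below what a high-end card is expected to draw.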



 

 

 

The NVIDIA presentation follows.










*View at TechPowerUp Main Site*


----------



## Caring1 (Aug 26, 2020)

All those drawings look very sketchy


----------



## Vya Domus (Aug 26, 2020)

Very reminiscent of Apple's style, the same kind of "we designed a screw for 10 years to make it perfect" type of thing.


----------



## QUANTUMPHYSICS (Aug 26, 2020)

I'm gonna sell my 2080Ti on Ebay and get as much as possible for it. 
Definitely going for the 3090, as my PSU can handle it. 
Gonna buy it on my card, get the Rewards Flyer points for it and then write the whole thing off as a business expense.


----------



## bug (Aug 26, 2020)

Vya Domus said:


> Very reminiscent of Apple's style, the same kind of "we designed a screw for 10 years to make it perfect" type of thing.


I fail to see the connection. If Nvidia patented the design, leaving us with a proprietary connector, that would be an Apple-worthy move.
But designing a screw for 10 years doesn't seem to have anything to do with what we're looking at here.


----------



## Chomiq (Aug 26, 2020)

So a single 12-pin will pull power from a single 8-pin pci-e cable?


----------



## medi01 (Aug 26, 2020)

QUANTUMPHYSICS said:


> I'm gonna sell my 2080Ti on Ebay and get as much as possible for it.


Not Titan RTX? Oh you poor homeless hippy...


----------



## W1zzard (Aug 26, 2020)

Chomiq said:


> So a single 12-pin will pull power from a single 8-pin pci-e cable?



2x 8-pin. 150 W max power for the card (+75 W slot) would just not be enough


----------



## kayjay010101 (Aug 26, 2020)

Chomiq said:


> So a single 12-pin will pull power from a single 8-pin pci-e cable?


That would equal a max of 225W, so doubtful. More likely 2x8 to 1x12, which would be 375W.


----------



## Chomiq (Aug 26, 2020)

W1zzard said:


> 2x 8-pin. 150 W max power for the card (+75 W slot) would just not be enough


Yeah thx for clarifying, missed that bit.


----------



## kiriakost (Aug 26, 2020)

Chomiq said:


> So a single 12-pin will pull power from a single 8-pin pci-e cable?



You're in luck; I am an electrician (plus electronics) and I did my research on wire and connector wattage very recently.
Each pair of wires (among the Molex pins in use) is designed to carry 5 A in continuous 24/7 operation.
NVIDIA intends to draw up to 30 A through the new connector, but that is planning ahead.
They mention a supplied adapter (cable); this can use two 8-pins, each able to deliver 15 A.

Per the Molex specifications, the absolute maximum current for a pair of wires is 7 A per pin.
I do not think they will push the envelope that far for now.
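Putting those per-pin figures into numbers, an illustrative sketch: the 6-power/6-ground pin layout and the 5 A continuous / 8.5 A maximum per-pin ratings are assumptions based on typical Micro-Fit-style terminals, not official NVIDIA specifications:

```python
# Rough deliverable wattage of a 12-pin connector with 6 power pins
# and 6 ground returns, at various per-pin current ratings.
# The 5 A (continuous) and 8.5 A (absolute max) figures are assumed
# Micro-Fit-style terminal ratings, not official NVIDIA numbers.
RAIL_V = 12.0   # GPU aux power is a 12 V rail
POWER_PINS = 6  # the other six pins are ground returns

def connector_watts(amps_per_pin: float) -> float:
    """Total power through the connector at a given per-pin current."""
    return POWER_PINS * amps_per_pin * RAIL_V

print(connector_watts(5.0))  # 360.0 W at a conservative continuous rating
print(connector_watts(8.5))  # 612.0 W at the absolute per-pin limit
```

So even at the conservative 5 A per pin, the connector itself comfortably covers the 300 W that two 8-pins can feed into it, with plenty of headroom above that.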


----------



## john_ (Aug 26, 2020)

So, they did this?






With a cut in the PCB to make the card shorter.


----------



## Dammeron (Aug 26, 2020)

john_ said:


> So, they did this?
> 
> View attachment 166741
> 
> With a cut in the PCB to make the card shorter.


PCB has a cutout, but there will be another PCB with VRM on it.


----------



## Chrispy_ (Aug 26, 2020)

Don't bother watching the video, it's 8:16 of utter drivel with not a single mention of Ampere's cooler. *The thumbnail showing Ampere's cooling flow is 100% YouTube clickbait*, with just a couple of relevant seconds at ~2:32 that could have been a static image.

Nvidia really are douchebags.


----------



## john_ (Aug 26, 2020)

Dammeron said:


> PCB has a cutout, but there will be another PCB with VRM on it.


That cutout brings the rear fan closer, keeping the card's length shorter.


----------



## btarunr (Aug 26, 2020)

john_ said:


> So, they did this?
> 
> View attachment 166741
> 
> With a cut in the PCB to make the card shorter.


Yup, basically that.


----------



## Aretak (Aug 26, 2020)

bug said:


> I fail to see the connection. If Nvidia patented the design, leaving us with a proprietary connector, that would be an Apple-worthy movement.
> But designing a screw for 10 years doesn't seem to have anything to do with what we're looking at here.


Sure it does, in that both are marketing BS to rope in the gullible. If anyone knows of a bookmaker offering odds on third party Ampere cards with "traditional" air coolers completely whipping this overdesigned pile of nonsense for both noise and thermals, I'd like to put my life savings on it.


----------



## iO (Aug 26, 2020)

I can see a lot of ripped off 12pin connectors if it isn't somehow also supported by the fan shroud...


----------



## bug (Aug 26, 2020)

Aretak said:


> Sure it does, in that both are marketing BS to rope in the gullible. If anyone knows of a bookmaker offering odds on third party Ampere cards with "traditional" air coolers completely whipping this overdesigned pile of nonsense for both noise and thermals, I'd like to put my life savings on it.


Rope into what? Whoever wants to buy this card will buy it regardless of the cooler used.



Chomiq said:


> So a single 12-pin will pull power from a single 8-pin pci-e cable?


Nope. 2 8-pin -> 1 12-pin
Look up pictures, they should be all over the place by now.


----------



## Xex360 (Aug 26, 2020)

They seem very knowledgeable until you see their designs.
Better to stick with a blower design and price competitively: while not very good, it's very versatile and can be used in all types of cases. Let the people who really know what they are doing create good cooling solutions for their cards.


----------



## Vayra86 (Aug 26, 2020)

Vya Domus said:


> Very reminiscent of Apple's style, the same kind of "we designed a screw for 10 years to make it perfect" type of thing.



Yep, it seems Nvidia has officially gone full mental.

Not sure I'm all in on this push they want. Quite sure I'm not, in fact. Nobody ever asked us if we wanted to sacrifice all the efficiency gains (in die size, power, etc.) of the last generations for RT. That better be something else... so far... is it?


----------



## Dredi (Aug 26, 2020)

Dammeron said:


> PCB has a cutout, but there will be another PCB with VRM on it.


No, just a single PCB. The VRM is clearly seen in the cad image.


----------



## medi01 (Aug 26, 2020)

Vayra86 said:


> Nobody ever asked us if we wanted to *sacrifice *all the efficiency gains (in die size, power, etc.) of the last generations *for RT*


Where does that "for RT" come from, pretty please?

Did you just call "The Leather Man pissed off TSMC" "sacrifice for RT"???


----------



## Vayra86 (Aug 26, 2020)

W1zzard said:


> 2x 8-pin. 150 W max power for the card (+75 W slot) would just not be enough



I'm curious, what is your take on their reason to increase the TDP so heavily?


----------



## steen (Aug 26, 2020)

Doubtless a nice piece of design, but necessitated by high power consumption. 8-layer PCB, back-drilled vias, high component density/quality -> high BOM. They're explicit in the video that the way to get higher performance is through higher power use. It will be telling for efficiency.


----------



## thesmokingman (Aug 26, 2020)

The 12pin still looks ridiculously stupid and a huge waste given the freakishly large size. Like they couldn't find the real estate on there?


----------



## Caring1 (Aug 26, 2020)

john_ said:


> So, they did this?
> 
> View attachment 166741


Yes, they reversed the picture so Sapphire was spelt backwards.


----------



## AnarchoPrimitiv (Aug 26, 2020)

QUANTUMPHYSICS said:


> I'm gonna sell my 2080Ti on Ebay and get as much as possible for it.
> Definitely going for the 3090, as my PSU can handle it.
> Gonna buy it on my card, get the Rewards Flyer points for it and then write the whole thing off as a business expense.



Wow, that's a relief.... I've been losing sleep from constantly worrying over what you intended to do with your old graphics card and whether you were going to get that 3090. I think I speak for the entire community when I say that.



Vayra86 said:


> I'm curious, what is your take on their reason to increase the TDP so heavily?



Well, I've read at various sources that the 3090 will use 390+ watts, with AIBs going above that.... There are already leaks showing three 8-pin connections on AIB models. According to Moore's Law Is Dead (if you have a negative opinion of him, keep it to yourself unless it's based on some empirical data or evidence that clearly warrants it), Nvidia had to up the TDP significantly because they're that concerned about RDNA2. Furthermore, he says that while AMD may not take the top performance crown, RDNA2 should be more efficient, and the second-biggest RDNA2, based on what his leaks tell him, should be the star of the next-gen cards, offering the best performance per dollar and per watt, and allowing AMD to lock up the upper mid-tier/lower top tier of video cards. He also said that he strongly encourages everyone to wait until RDNA2 is released before making any purchases, and that this is looking like Radeon's Zen 2 moment.


----------



## Vayra86 (Aug 26, 2020)

AnarchoPrimitiv said:


> Wow, that's a relief.... I've been loosing sleep from constantly worrying over what you intended to do with your old graphics card and whether you you were going to get that 3090.  I think I speak for the entire community when I say that.
> 
> 
> 
> Well, I've read at very sources that the 3090 will use 390+ watts, with AIBs going above that.... There's already leaks showing three 8 pin connections in AIB models.  According to moore's law is dead (if you have a negative opinion of him, keep it to yourself unless it's based on some empirical data or evidence that clearly warrants that negative opinion) Nvidia had to up the TDP significantly because they're that concerned about RDNA2.  Furthermore, he sais that while AMD may not take the top performance crown, that RDNA2 should be more efficient and that second biggest RDNA2, based on what his leaks tell him, should be the star of the next gen cards, offering the best performance per dollar and watt and that it should allow AMD to lock up the upper modtier/lower top tier of videocard....he also said that he strongly encourages everyone to wait until RDNA2 is released before making any purchases, and this is looking like Radeon's Zen2 moment.



Well, if all that is true, I suppose 2020 really is full of surprises.


----------



## bug (Aug 26, 2020)

Vayra86 said:


> I'm curious, what is your take on their reason to increase the TDP so heavily?


What increase? This is still equivalent to using 2 8-pin connectors which we've had for years.


----------



## jesdals (Aug 26, 2020)

Well I did not see the new cooler design in that video?


----------



## Vayra86 (Aug 26, 2020)

bug said:


> What increase? This is still equivalent to using 2 8-pin connectors which we've had for years.



Max power draw perhaps, but it also looks like a pretty beefed-up cooler, which doesn't suggest we'll stick to 250 W... there is 375 W available.



jesdals said:


> Well I did not see the new cooler design in that video?



A tiny, tiny glimpse in the last few seconds. This presentation is a whole lot of nothing. Gotta get that hype train rollin'.


----------



## Chrispy_ (Aug 26, 2020)

jesdals said:


> Well I did not see the new cooler design in that video?





Vayra86 said:


> This presentation is a whole lot of nothing  Gotta get that hype train rollin


Clickbait is a scourge.
https://www.techpowerup.com/forums/...-design-12-pin-confirmed.271410/#post-4335211


----------



## ppn (Aug 26, 2020)

Part of the backside is shown for one second at the very end. There may be other hidden messages, apart from the ridiculous standing-up 12-pin next to the PCB fish-tail cutout.


----------



## bug (Aug 26, 2020)

Vayra86 said:


> Max power draw perhaps, but it also looks like a pretty beefed up cooler, doesn't suggest we'll stick to 250W... There is 375 available.


Yeah, that cooler... I don't buy 3-slot video cards on principle.

One more week of guessing and then we'll see.
My money's on the 3090 being some freak like the 2080Ti and rest of the cards being much saner. But still priced close to their Turing counterparts


----------



## Vayra86 (Aug 26, 2020)

bug said:


> Yeah, that cooler... I don't buy 3-slot video cards on principle.
> 
> One more week of guessing and then we'll see.
> My money's on the 3090 being some freak like the 2080Ti and rest of the cards being much saner. But still priced close to their Turing counterparts



Yeah, it's the only way out for them lol. Agreed


----------



## TheDeeGee (Aug 26, 2020)

Standing up 12-Pin, R.I.P. Arctic Cooling.


----------



## Jinxed (Aug 26, 2020)

I'd like to remind you that high power draw, unlike in AMD's case, does not mean low power efficiency. It may very well be that with Ampere, Nvidia increased power efficiency by 50% or even 100% AND at the same time decided to feed a lot more power to the GPU, just because they can. What would that mean in the end? Extreme performance. If you double your performance per watt and at the same time double the power input, you get quadruple the performance. All you need is a GPU that can take it.

It remains to be seen how much Nvidia really improved perf/W efficiency and how much they increased power input (all we have now are rumours). But just because AMD has to drive the clocks of their GPUs well past their optimal efficiency point in order to be at least remotely competitive, it doesn't mean Nvidia is doing the same.

_Edited to remove trolling/name-calling that does not conform with forum guidelines. Please keep this in mind when posting here in the future. - TPU Moderation Team_


----------



## chodaboy19 (Aug 26, 2020)

People are always afraid of change; the initial reaction always follows the Kübler-Ross model, with emotions in the following order: denial, anger, bargaining, depression, and acceptance.


----------



## Jinxed (Aug 26, 2020)

chodaboy19 said:


> People are always afraid of change, the initial reaction always follows the Kübler-Ross model with the following order emotions:  denial, anger, bargaining, depression, and acceptance.


So true!


----------



## iO (Aug 26, 2020)

Chrispy_ said:


> Clickbait is a scourge.
> https://www.techpowerup.com/forums/...-design-12-pin-confirmed.271410/#post-4335211



It's right there at ~2:25. Also, no crazy airflow apocalypse in the case as some suggested; the back fan is in pull config. Edit: Or they revised the design and put the second fan on the front, like a sane person would do...


----------



## Jism (Aug 26, 2020)

kiriakost said:


> You are lucky,  I am an electrician (plus electronics) and I did my research about wires and connectors wattage, very recently.
> Its pair of wires (among  molex pins in use ) this is designed to transfer 5A  for 24/24 operation.
> NVIDIA has the thought to use up to 30A from the new connector, but this is a planning ahead.
> They are mentioning about a supplied adaptor (cables), this can use two 8Pin them able to deliver 15A its one.
> ...



The PCI-E specifications are so conservative that you can push a lot more through than intended. A capable PSU, connectors, wires, and video card can pull way more than the advertised 75/150 W. I mean, even my OC'd 580 managed to pull 22 A from one single 8-pin connector; it got warm, yes, lol.





But if you think about why Nvidia introduced this "one" connector: it's designed for the enterprise market and simply carried over to the gaming side. They no longer have to make two different models of cards for enterprise and consumer; the cards that do not pass enterprise quality are moved over to the gamer ones. Nothing really gets lost in these markets.


----------



## M2B (Aug 26, 2020)

According to the leaked benchmarks (Time Spy Extreme), the 3090 is up to 56% faster than the 2080 Ti. Even if it consumes 350 W, it should end up being 15% or so more efficient, which is not bad if Samsung's 10 nm-class process is being used, as it is miles behind TSMC's N7P in performance and efficiency.
If their cooling solution can indeed handle the 350 W TDP with acceptable thermals and noise levels, then they've done an amazing job.


----------



## Jinxed (Aug 26, 2020)

M2B said:


> According to the leaked benchmarks (Time Spy Extreme) the 3090 is up to 56% faster than 2080Ti, even if it consumes 350W it should end up being 15% or so more efficient which is not bad if Samsung's 10nm is being used, which is miles behind TSMC's N7P process in performance and efficiency.
> If their cooling solution can indeed handle the 350W TDP with acceptable thermals and noise levels then they've done an amazing job.


And those results are also a bit dated now, probably made with pre-production versions of the card with testing drivers. It may have improved a lot since then.


----------



## Turmania (Aug 26, 2020)

If they can make the RTX 3070 hit around 2080 Ti performance at around 150 W power consumption, then it's game over for AMD.


----------



## Emu (Aug 26, 2020)

Jism said:


> The PCI-E specification are so super safe, that you can push alot more through then intended. A capable PSU, connectors, wires and video cards can pull way more then the advertised 75/150W. I mean even my oc'ed 580 did manage to pull 22A from one single 8 pin connector, it got warm, yes, lol.
> 
> View attachment 166753
> 
> But if you think about it, why Nvidia introduced this "one" connector, it's designed for the enterprise market, and simply pushed over to the gaming part. They no longer have to make 2 different model(s) of cards for both enterprise and / or consumer. The cards that do not pass the enterprise quality are moved over to the gamer ones. Nothing really gets lost in these markets.



If people are worried about their cards pulling over 150 W per 8-pin connector, then they really shouldn't look up the R9 295X2, which pulled up to 225 W per 8-pin connector lol


----------



## KarymidoN (Aug 26, 2020)

W1zzard said:


> 2x 8-pin. 150 W max power for the card (+75 W slot) would just not be enough



Basically they're saying the *max power draw* from a card with the new 1x 12-pin connector is 375 W? (2x 150 W 8-pin + 75 W from the PCIe slot), right?

Let's see what the real power draw is (after reviews); I hope they just left a lot of extra capacity on the connector to be used.


----------



## chodaboy19 (Aug 26, 2020)

KarymidoN said:


> Basically they're saying the max Power draw from a card with 1x 12pin new connector is 675W? (2x 300W 8pin + 75W From PCIE Power)?
> lets see what the real power draw is, i hope they just left a lot more capacity on the connector to be used.



No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.

But it's assumed the power consumption can reach: (150W x 2 ) + 75W = 375W


----------



## Dante Uchiha (Aug 26, 2020)

Even though it is an abnormally expensive product, it will be interesting to see how efficient this exotic cooling is.  Where's AMD's answer?


----------



## psyclist (Aug 26, 2020)

So they stole Sapphire's design, kudos Nvidia lol. For those with tower CPU coolers: this feeds hot air directly into the tower's intake, so how does that affect CPU temps, I wonder? 375 W is gonna make for some pretty hot air being fed directly into the CPU cooling solution.


----------



## Xuper (Aug 26, 2020)

So all the heat will be dumped onto the CPU? Wow, how innovative!


----------



## TheoneandonlyMrK (Aug 26, 2020)

bug said:


> What increase? This is still equivalent to using 2 8-pin connectors which we've had for years.


What, the same two 8-pin power connectors that got Vega ripped apart in forums the world over?
Is that OK now?

As for the OP, details my asssss. We know about a spring now, cheers.


----------



## Chrispy_ (Aug 26, 2020)

chodaboy19 said:


> People are always afraid of change, the initial reaction always follows the Kübler-Ross model with the following order emotions:  denial, anger, bargaining, depression, and acceptance.


I'm not sure I'm ever going to accept a $1400 graphics card. I'll probably end up buying a dozen for work but that's not something I'd willingly spend from my own funds - I am definitely a "performance/Watt" sweet-spot seeker.


----------



## phanbuey (Aug 26, 2020)

I hope they release a stubby watercooled version


----------



## Fluffmeister (Aug 26, 2020)

phanbuey said:


> I hope they release a stubby watercooled version



And give it a meanish but cute sounding name.... Furry X!


----------



## ppn (Aug 26, 2020)

The full-cover water block would look really nice on the fish-tail short PCB.


----------



## mouacyk (Aug 26, 2020)

ppn said:


> The full cover water block would look realy nice on the fish tail short PCB.


Don't worry, EK would have machined it to match.


----------



## FreedomEclipse (Aug 26, 2020)

Am I the only one who thought the video didn't explain anything of substance?? I was waiting for them to talk more about the cooling solution, and they did, but only about how vapour chambers work, not how it would work in relation to the new GPU.

They used big fancy complicated words to make what they were doing sound majorly impressive, but they didn't exactly give a whole lot away. I don't know if they did this deliberately to keep people on the edge of their seats, or if they are talking down to me because they don't think I know how the science works.


----------



## Chrispy_ (Aug 26, 2020)

iO said:


> It's right there at ~2:25. Also no crazy airflow apocalypse in the case as some suggested. Back fan is in pull config. Edit: Or they revised the design and put the second fan on the front like a sane person would do...
> View attachment 166751


OMG, I must have looked away for 5 seconds. He was talking about computational fluid dynamics with an unrelated single-fan blower from the 10-series and prior on screen, then the money shot which I missed, and then another 35 seconds of unrelated 2080 Ti.

I guess it's not technically clickbait, just a video that has five seconds of content (arguably that one screenshot is all that matters) and 8 minutes, 14 seconds of padding....


----------



## londiste (Aug 26, 2020)

thesmokingman said:


> The 12pin still looks ridiculously stupid and a huge waste given the freakishly large size. Like they couldn't find the real estate on there?


Come again?

*NVIDIA 12-pin Connector Pictured Next to 8-pin PCIe - It's Tiny* (www.techpowerup.com)


----------



## KarymidoN (Aug 26, 2020)

chodaboy19 said:


> No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.
> 
> But it's assumed the power consumption can reach: (150W x 2 ) + 75W = 375W



My bad, I made a typo: 300 W connector + 75 W from PCIe. I don't understand why the box with the adapter that Seasonic was shipping said "850 W PSU recommended"; that led me to believe these cards would be more power hungry. Most 650 W Gold-level PSUs will do just fine if you're not overclocking these cards, so why the 850 W recommendation from Seasonic?


----------



## steen (Aug 26, 2020)

M2B said:


> According to the leaked benchmarks (Time Spy Extreme) the 3090 is up to 56% faster than 2080Ti, even if it consumes 350W it should end up being 15% or so more efficient


Closer to 12% if 2080ti FE is 250W. If 3090 TBP is 320W, then we're looking at 22%. TSE is probably not a great metric here.
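That perf/W arithmetic can be reproduced directly. Note the 56% speedup, the 250 W baseline, and the 350 W / 320 W figures are the leaked and speculated numbers discussed in this thread, not confirmed data:

```python
# Perf/W comparison for a hypothetical 3090 that is 56% faster than a
# 250 W 2080 Ti (leaked Time Spy Extreme numbers -- illustrative only).
def perf_per_watt_gain(speedup: float, new_power_w: float,
                       old_power_w: float = 250.0) -> float:
    """Relative perf/W improvement as a fraction (0.11 == +11%)."""
    return speedup / (new_power_w / old_power_w) - 1.0

print(round(perf_per_watt_gain(1.56, 350.0) * 100))  # 11 -> ~11-12% at 350 W
print(round(perf_per_watt_gain(1.56, 320.0) * 100))  # 22 -> ~22% at 320 W
```

In other words, most of the headline speedup is bought with extra power; the underlying efficiency gain is far smaller than the raw performance delta.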



> which is not bad if Samsung's 10nm is being used, which is miles behind TSMC's N7P process in performance and efficiency.
> If their cooling solution can indeed handle the 350W TDP with acceptable thermals and noise levels then they've done an amazing job.


You think so? If the node shrink was from TSMC 16(12N) to Samsung 10(8LPP), they've bumped TBP 40% (350W speculated) to get a 55% increase in TSE, with commensurate increase in BOM, etc. The majority of the gain is from increased power draw. Naively, a die shrunk TU102 would've yielded similar results. I suspect the 24x GDDR6X modules are adding disproportionately & don't benefit most games yet. Will be great to see TPU perf/watt gaming results of GA102/4 SKUs esp AIB models.


----------



## nangu (Aug 26, 2020)

FreedomEclipse said:


> Am i the only one that thought the video didnt explain anything of substance?? I was waiting for them to talk more about the cooling solution and they did but only how vapour chambers work not how it would work in relation to the new GPu.
> 
> They used big fancy complicated words to make what they were doing sound majorly impressive but they didnt exactly give a whole lot away. I dont if they did this deliberately to keep people on the edge of their seat or if they are talking down to me because they dont think i know how the science works.



I thought the same.

So, Nvidia "invented" a new slim power connector, and at the same time designed a three-slot monstrosity. I don't know if they are afraid of RDNA2 or what else, but I think something is happening here, because in that video they focus a lot (with fancy words à la Apple, of course) on heat dissipation and power consumption for performance.

I hope the medium-tier cards will offer a good performance bump at the same power as the current 2070/2080, at least.


----------



## Fluffmeister (Aug 26, 2020)

Let's hope it's RDNA2. The next-gen game reveals looked surprisingly mediocre, but I appreciate that isn't AMD's fault.


----------



## dragontamer5788 (Aug 26, 2020)

nangu said:


> So, Nvidia "invented" a new slim power connector, and at the same time designed a three slot monstruosity. I don't know if they are affraid of RDNA2 or what else, but I think something is happening here because they try to focus a lot (with fancy words ala Apple of course) on heat dissipation and power consumption for performance in that video.



No. NVidia has just decided to call the "Titan" the 3090 instead, while still commanding "Titan"-class prices.

Same thing with Intel inventing the "i9" when they'd called their "i7" the top end for nearly a decade. By changing the name, you screw with the psychology of humans and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU; you're reasonable. The unreasonable people are the ones buying a $2000 3090.

Classic marketing trick. It's called decoy pricing, and NVidia (and really everyone: Intel / AMD / etc.) have been doing it for decades.


----------



## TheoneandonlyMrK (Aug 26, 2020)

nangu said:


> I tought the same.
> 
> So, Nvidia "invented" a new slim power connector, and at the same time designed a three slot monstruosity. I don't know if they are affraid of RDNA2 or what else, but I think something is happening here because they try to focus a lot (with fancy words ala Apple of course) on heat dissipation and power consumption for performance in that video.
> 
> I hope the medium tier cards will offer a good performance bump at the same power as current 2070/2080 at least.


I like how one whole tier is dedicated to the way stuff is assembled, yet they only now tidied up power delivery with a new plug. Those adapters are not going to be tidier, and are not very hideable.
Surely they can't patent a Molex.

Seasonic could be leaning on a properly built single-rail PSU designed to handle more current per wire = 850 watts; most PSUs of any quality would be up to that.

All mine could/have.


----------



## nangu (Aug 26, 2020)

dragontamer5788 said:


> No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices.
> 
> Same thing with Intel inventing the "i9", when they've called their "i7" the top-end for nearly a decade. By changing the name, you screw with the psychology of humans, and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU, you're reasonable. The unreasonable people are the ones buying a $2000 3090.
> 
> Classic marketing trick. Its called decoy pricing, and NVidia (and really everyone: Intel / AMD / etc. etc.) have been doing it for decades.



You're right, I didn't see the 3090 launch as a Titan, and it makes a lot of sense that way.


----------



## TheoneandonlyMrK (Aug 26, 2020)

nangu said:


> You're right, didn't see the 3090 launch as a Titan, and it has a lot of sense that way.


That's been a YouTuber rumour for a while now; I could point you to them. Allegedly it's because it's not performant enough relative to what Nvidia expects Big Navi to be. Win-win, since their Titan can come later for more money, obviously.


----------



## Chrispy_ (Aug 26, 2020)

Fluffmeister said:


> Let's hope it's RDNA2, but the next game reveals looked surprisingly mediocre, but I appreciate that isn't AMD's fault.


Raytracing is so expensive that even with Ampere's supposed 4x DXR performance increase, we're still looking at faking it as the sensible option.

I always like to refer back to *this video*, from when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.

I mean, even when raytracing settings were set unrealistically high by DICE - so high that a 2080Ti was required to hit 60fps at 1080p - it was still only marginally better than faking it with shaders. Yes, if you stopped playing the game and actually just zoomed in on fine details, the DXR renderer was better looking. It's just that the cost was too high for such a subtle improvement.

You only have to play Quake II RTX and experiment with the temporal filtering, GI ray count, and de-noiser to get an idea of just how basic an approximation of raytracing current DXR implementations are. There's almost as much fakery and guesswork going on with DXR as the shader-based fakery we're already used to.


----------



## steen (Aug 26, 2020)

nangu said:


> So, Nvidia "invented" a new slim power connector,



Heh, 12-pin Molex micro-fit 3.0. They will have specced the pinout, though.



> don't know if they are affraid of RDNA2 or what else


There may be something to this, but I think Nv built 3090 because they could (& price it accordingly). More product tiers. Renxun is keen on Ferrari analogies...



dragontamer5788 said:


> No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices


Instead...? Titan will likely release in the Super refresh cycle once 16Gbit GDDR6X modules are available. I wouldn't be surprised if Titan/Quadro are released with 48GB GDDR6 @ ~800GB/s initially.


----------



## JustAnEngineer (Aug 26, 2020)

Rumor puts that huge RTX 3090 cooler at 310 mm long, which is 5 mm more than I have available in my new case.


----------



## Jinxed (Aug 26, 2020)

Chrispy_ said:


> Raytracing is so expensive even with Ampere's supposedly 4x DXR performance increase, we're still looking at faking it being a decent option.
> 
> I always like to refer back to *this video*, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.
> 
> ...


And you are basing that on what? There's nothing fake about the current raytracing implementation. It is, and always was, about resolution. Just like old-gen graphics started at 320x240 and went through 640x480 all the way up to the 4K we have now, raytracing is walking the same path. It's about how many rays per pixel you can cast. Essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with a lack of data and noise, just like the low resolutions in the old times of gaming.

Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", i.e. the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time: you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach, and denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
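The "rays per pixel" point above is just Monte Carlo sampling: each ray is a random sample of the lighting at a pixel, and more samples mean less noise around the same expected value. Here is a deliberately toy sketch of that idea (the 0.7 "coverage" value and the `shade_pixel` helper are made up for illustration, not any real DXR pipeline):

```python
import random

def shade_pixel(n_rays, seed=0):
    """Toy Monte Carlo estimate of a single pixel's brightness.

    Each 'ray' randomly lands on either a lit surface (1.0) or shadow (0.0),
    with the true lit fraction being 0.7. More rays -> less noise, but the
    expected value never changes. Purely illustrative.
    """
    rng = random.Random(seed)
    hits = sum(1.0 if rng.random() < 0.7 else 0.0 for _ in range(n_rays))
    return hits / n_rays

noisy = shade_pixel(1)        # 1 ray/px: the pixel is either 0.0 or 1.0, pure noise
smooth = shade_pixel(10_000)  # many rays/px: converges toward the true 0.7
```

With 1 ray per pixel you only ever get the extremes, which is why real implementations have to lean on a denoiser; at thousands of rays the estimate settles near the true value, which is what offline renderers can afford and games cannot.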


----------



## medi01 (Aug 26, 2020)

Jinxed said:


> Even such a small sample rate is already noticeably better than traditional rasterization.


Oh, is it?
Better than "traditional rasterization" (I guess that means non-DXR) in which game?


----------



## TheoneandonlyMrK (Aug 26, 2020)

Jinxed said:


> And you are basing that on what? There's nothing fake about the current raytracing implementation. It is and always was about the resolution. Just like old gen graphics were starting at 320x240, going through 640x480 all the way up to the 4k we have now, raytracing is going through that same path. It's about how many rays per pixel you can cast. Essentially you get a low res, high noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it, you're just dealing with the lack of data and noise, just like the low resolutions in the old times of gaming. Newer gens of cards will have more power, will be able to cast more rays per pixel, improving the "resolution", the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need real time output. That means you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was if I remember correctly 1 ray per pixel due to their checkerboarding approach. Denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's gonna be a massive improvement.


Err, it's real now? No, it's alllllll fake. We're quite far out from real, and we'll need way more than RTX DXR for that.

He probably based that on trying it, because that's my opinion as an owner.

It's the software equivalent of 3D TV at the moment: initially, oohhh, nice; then two weeks later max, meh, can't be bothered, and nothing to watch.


----------



## M2B (Aug 26, 2020)

medi01 said:


> Oh, is it?
> Better than "traditional rasterization" (I guess it means non DXR) in which game?



That's not traditional rasterization; that demo in fact uses some form of tracing for its global illumination system.


----------



## kiriakost (Aug 26, 2020)

Jism said:


> The PCI-E specifications are so super safe that you can push a lot more through than intended. A capable PSU, connectors, wires and video card can pull way more than the advertised 75/150 W. I mean, even my OC'ed 580 managed to pull 22 A from one single 8-pin connector; it got warm, yes, lol.
> 
> But if you think about it, why Nvidia introduced this "one" connector, it's designed for the enterprise market, and simply pushed over to the gaming part. They no longer have to make 2 different model(s) of cards for both enterprise and / or consumer. The cards that do not pass the enterprise quality are moved over to the gamer ones. Nothing really gets lost in these markets.



Electrically, there are two major hazards when the cable harness is working at its limits:
a) severe voltage fluctuation, which can cause the card to freeze while gaming.
b) Molex pins can overheat and even burn.

PSU over-current protection does not cover Molex pins arcing; that is an instant, extremely high-current event.
Anyway, I am not trying to be a joy killer; all I am saying is that putting extreme stress on electrical parts is a bad idea.


----------



## londiste (Aug 26, 2020)

Chrispy_ said:


> I always like to refer back to *this video*, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase


While you are right about the fidelity not being worth the performance hit in a multiplayer title, "at its highest quality" is rather misleading. It would be very difficult to see the differences between then and now; the optimizations were primarily technical, not paid for with image quality. By the way, many if not most of these scenes do show clear artifacting in the screen-space reflections.


M2B said:


> That's not traditional rasterization, that demo uses some form of Tracing for the global illumination system in fact.


Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing, and it is highly optimized in a way similar to CryEngine's Neon Noir demo: raytraced GI or AO that falls back to a voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT, which is kind of strange considering the PS5 is supposed to have it.
That was not the point of the demo, though; enormous amounts of polygons, streamed in real time from fast storage, were the point and the showoff feature.


----------



## TheoneandonlyMrK (Aug 26, 2020)

kiriakost said:


> Electrically, there are two major hazards when the cable harness is working at its limits:
> a) severe voltage fluctuation, which can cause the card to freeze while gaming.
> b) Molex pins can overheat and even burn.
> 
> ...


It's typical Nvidia bullshit: they heard 12 V PSUs were going to be a thing and decided to gazump everyone again by "inventing" it, tout de suite.
Same as they "invented" raytracing some time after the first guys did, and after Microsoft announced DXR.


----------



## thesmokingman (Aug 26, 2020)

londiste said:


> Come again?
> 
> 
> 
> ...



I'm talking about the card not the plug.


----------



## Krzych (Aug 26, 2020)

KarymidoN said:


> Basically they're saying the *max power draw* from a card with 1x 12-pin new connector is 375W? (2x 150W 8-pin + 75W from PCIE power), right?
> 
> Let's see what the real power draw is (after reviews); I hope they just left a lot more capacity on the connector to be used.



The two 8-pin connections of the 12-pin cable go into the PSU; that is different from the 150 W 8-pin you plug into the GPU. These are the sockets that would normally power your 2x 8-pin cables, each rated up to 300 W. So theoretically the 12-pin is good for up to 600 W.

This doesn't necessarily hint at anything about Ampere's power draw, but it could mean that even the Founders Edition is going to be able to pull over 375 W. Most likely not at the reference TDP, but after raising the power target to the cap. Theoretically there would be no need for dual 8-pin if it were capped at 320 W like the 2080 Ti. Using two sockets on the PSU instead of one is certainly some kind of compatibility concern; they wouldn't go for it if it weren't needed. I wonder if there is going to be a single 8-pin version for lower-end cards like the 3070, assuming they get the 12-pin too.
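The arithmetic behind the two readings of the adapter is worth spelling out. A quick back-of-envelope sketch, using the commonly cited PCIe/ATX figures (the constants are those rumoured limits, not official Ampere specs):

```python
# Back-of-envelope power budget for the rumoured 12-pin adapter.
PCIE_SLOT_W = 75        # PCIe x16 slot limit
EIGHT_PIN_GPU_W = 150   # 8-pin PCIe connector, GPU side (spec limit)
PSU_SIDE_8PIN_W = 300   # what a PSU-side 8-pin socket typically feeds
                        # (it normally drives a whole 2x 8-pin cable)

# Conservative reading: adapter = two GPU-side 8-pins, plus the slot.
conservative = 2 * EIGHT_PIN_GPU_W + PCIE_SLOT_W   # 375 W total board power

# The reading above: the 12-pin cable occupies two PSU-side sockets,
# so the connector itself could theoretically carry far more.
theoretical = 2 * PSU_SIDE_8PIN_W                  # 600 W through the connector

print(f"conservative: {conservative} W, theoretical: {theoretical} W")
```

The gap between 375 W and 600 W is exactly why the dual PSU-side sockets read as headroom rather than a statement about stock TDP.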


----------



## Jinxed (Aug 26, 2020)

theoneandonlymrk said:


> Err, it's real now? No, it's alllllll fake. We're quite far out from real, and we'll need way more than RTX DXR for that.
> 
> He probably based that on trying it, because that's my opinion as an owner.


Like I said, there's nothing fake about it. Raytracing is in fact quite simple. The same logic, light/luminance equations and PBR materials apply to professional renderers and real-time raytracing in games alike. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see what the noisy ground-truth output looks like in this video:








It will still take a while to get photorealistic real-time output, of course, as that may require an order of magnitude more samples (rays) per pixel. But there's nothing fake about it even now. That's just a lie from someone in denial of the technology.


----------



## M2B (Aug 26, 2020)

londiste said:


> Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing and it is highly optimized in a way similar to CryEngine's Neon Noir demo - raytraced GI or AO that falls back to voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT which is kind of strange considering PS5 is supposed to have that.
> That was not the point of demo - enormous amounts of polygons and streaming them in real-time from fast storage was the point and showoff feature.



I'm honestly not even sure if it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
Apparently it's different to the triangle RT solution from Nvidia.


----------



## Jinxed (Aug 26, 2020)

M2B said:


> I'm honestly not even sure if it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
> Apparently it's different to the triangle RT solution from Nvidia.


Actually, it's just an extension of light probes. You can see the typical light-probe artifacts (changes in the brightness of surfaces) when moving through the tunnel from the big cave. The only difference is that the shading on the triangles looks much more realistic. But that is due to the high-poly-count feature in Lumen, with the triangles being almost sub-pixel sized, currently supported only on the PS5 because it has such a ridiculously fast custom-made SSD. The light is still incorrect. It's the illusion that is significantly better.


----------



## M2B (Aug 26, 2020)

Jinxed said:


> Like I said, nothing fake about it. Raytracing is in fact quite simple. The same logic, light/luminance equations and PBR materials apply for professional renderers and real-time raytracing in games. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see how the noisy ground truth output looks like in this video:
> 
> 
> 
> ...




Ray-traced shadows, ambient occlusion and global illumination don't need that many samples to look good because of their softer look and nature; 1 or 2 samples per pixel should do the job with good enough denoising. When it comes to reflections, though, the story is a bit different, and more samples are needed for a convincing look.
Nvidia is apparently working on more efficient denoising methods, which could potentially improve performance and even visuals.


----------



## kiriakost (Aug 26, 2020)

theoneandonlymrk said:


> It's typical Nvidia bullshit: they heard 12 V PSUs were going to be a thing and decided to gazump everyone again by "inventing" it, tout de suite.
> Same as they "invented" raytracing some time after the first guys did, and after Microsoft announced DXR.



I will disagree: from the moment NVIDIA supplies a dual 8-pin (6+2) adapter, the industry is not being pushed to follow in their footsteps. PSU development and manufacturing does not happen in just a few months.

As gossip or speculation, I will say that on NVIDIA's roadmap, the next GPU after this one will use less power than that.
But that is material for a conversation no sooner than May 2021.


----------



## Chrispy_ (Aug 26, 2020)

Jinxed said:


> And you are basing that on what? There's nothing fake about the current raytracing implementation. It is and always was about the resolution. Just like old gen graphics were starting at 320x240, going through 640x480 all the way up to the 4k we have now, raytracing is going through that same path. It's about how many rays per pixel you can cast. Essentially you get a low res, high noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it, you're just dealing with the lack of data and noise, just like the low resolutions in the old times of gaming. Newer gens of cards will have more power, will be able to cast more rays per pixel, improving the "resolution", the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need real time output. That means you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was if I remember correctly 1 ray per pixel due to their checkerboarding approach. Denoising makes that into something useful. Even such a small sample rate is already noticeably better that traditional rasterization. Now imagine 4 rays per pixel. That's gonna be a massive improvement.


Basing that on the example I specifically singled out, because it lets you mess around with settings and turn off the fakery to see what's really going on under the hood.

Raytracing a scene fully on my 2060S at native resolution still takes 20 seconds to produce a single decent-quality frame, so two main tricks are used to generate a convincing frame in the fraction of a second actually available:


*Temporal denoiser + blur*
This is based on previous-frame data. With the textures turned off, the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image is the desired final result, which took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this dark, laggy, splotchy smear that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
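The 'fade-in' behaviour described above falls out naturally if you model a temporal denoiser as a running blend of past frames. A minimal single-value sketch, assuming a simple exponential moving average (real denoisers reproject with motion vectors and do far more, but the lag mechanism is the same):

```python
def temporal_accumulate(frames, alpha=0.1):
    """Blend each new noisy frame into a running history buffer.

    history = (1 - alpha) * history + alpha * new_frame
    A low alpha gives a smooth result but needs many frames to converge,
    which is exactly the smear/fade you see right after the camera moves.
    Simplified one-value-per-frame sketch, not a real denoiser.
    """
    history = frames[0]
    trace = [history]
    for f in frames[1:]:
        history = (1 - alpha) * history + alpha * f
        trace.append(history)
    return trace

# A surface whose true brightness is 1.0, sampled so noisily that each
# frame reports either 0.0 or 2.0 (alternating):
trace = temporal_accumulate([0.0, 2.0] * 50, alpha=0.1)
# trace starts far from the truth and only settles near 1.0 after many frames
```

The first few entries of `trace` are wildly wrong (the fade-in), while the tail hovers near the true 1.0, which is why any freshly revealed region of the screen looks like a splotchy mess for its first dozens of frames.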






*Resolution hacks and intelligent sampling zones that draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)*
Here's an image from the same room, zoomed in a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than a real transparency calculation
C - the actual resolution of traced rays. Each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork: temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly-lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.






So, combine those two things together. Firstly we have very low ray density that is used as a basis for region definitions that can then be approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region. If I was going to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions and potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work in filling in the gaps, especially when it's all in motion. Anyway, once it's tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames are blurred together in a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because temporal filtering of on-screen data only means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

Don't get me wrong, QuakeII RTX is a technological marvel - it's truly incredible how close to a realtime raytraced game we can get with so many hacks and fakery to spread that absolutely minimal, almost insignificant amount of true raytracing around. Focus on the bits that matter, do it at a fraction of the game resolution and only in areas that are visibly detailed. Blur the hell out of the rest using tens of previous frames and a library of pre-baked ray tiles to approximate a raytraced result until you have hundreds of frames of data to actually use for real result.

We're just not at a level where we can afford to do it at full resolution, for the whole screen at once, and for regions offscreen so that movement doesn't introduce weird visual artifacts. 10x faster than a 2080 Ti might get us those first constraints, and another couple of orders of magnitude might bring the temporal filter down from a hundred frames for a useful image to single-digit numbers of frames. It's still not realtime, but if people can run games at 100 fps, 25 fps of raytraced data with temporal interpolation should be very hard to notice.

So yeah, real raytracing is going to need 1000x more power than a 2080Ti, but even with what we have right now, it's enough to get the ball rolling if you don't look too closely and hide the raytracing between lots of shader-based lies too. Let's face it, shader based lies get us 90% there for almost free, and if the limited raytracing can get us 95% of the way there without hurting performance, people are going to be happy that there's a noticeable improvement without really caring about how it happened - they'll just see DXR on/off side by side and go "yeah, DXR looks nicer".


----------



## Jinxed (Aug 26, 2020)

Chrispy_ said:


> Firstly we have very low ray density that is used as a basis for region definitions that can then be approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region.



That is a complete lie. A "library of tile-based approximations" is completely made up. There are denoisers at work, which you are obviously unable or unwilling to comprehend. The noisy ground-truth output you posted is exactly the noisy ground truth that you can see in this video:








There is nothing fake about it. There is no tile-based anything used to process it; you made that up. It uses denoisers. In fact, in most games those denoisers are driven by the Turing tensor cores. What you're also missing is the fact that the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame. And there is nothing fake or weird about VRS either. If you have a constrained budget, which rays per pixel are at the moment and will be for the foreseeable future, you spend it where it matters most. So of course the areas with more noticeable detail get more rays per pixel. Why the hell not?

And worst of all for you, there is actually an introductory video by Nvidia themselves with a Bethesda dev going into detail about the Quake RTX: 








The dev even said in the video: "No tricks, this is actually real. We're not faking it."

Nice try, but you failed.


----------



## mouacyk (Aug 26, 2020)

Nice to know that a $1200 2080 Ti renders RTX at 320x200 up to 60 fps for true 1080p raster resolutions. Intel already did this with Q2 in late 2000, but at around 20 fps at 480p native res. Of course, they didn't have a hybrid rendering pipeline then, so no raster tricks to fill in the gaps. That's what DXR is for, and NVidia exploited it quite well.


----------



## TheoneandonlyMrK (Aug 26, 2020)

Jinxed said:


> There is no tile-based whatever made-up thing used to process that. It uses denoisers. In fact in most games those denoisers are driven by the Turing tensor cores. Also what you're missing is the fact that the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame


Tiles = older frames? Err.

All rasterization and all raytraced graphics are fake by remit.


----------



## dragontamer5788 (Aug 26, 2020)

Jinxed said:


> It uses denoisers.



For those in the graphic-arts community, there's a term called unbiased rendering. Why? Because even raytracers are "fake" to some degree. Unbiased rendering is the closest thing to a physical simulation, by my understanding. However, biased rendering (which includes many raytracing effects) is faster, and in many cases converges faster as well. This leads to realistic-looking simulated images, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.

Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and it's efficient to do, but there's no physical principle behind temporal denoising. Light just doesn't "time travel" and average itself with future photons that hit the same area.

---------

Ambient occlusion is another funny biased-rendering technique. It's completely fake: corners do *NOT* absorb light into invisible black holes. But we use AO techniques because they make shadows look deeper and higher-contrast, which aids the video game player significantly.


----------



## Jinxed (Aug 26, 2020)

theoneandonlymrk said:


> Tiles =older frames ? Err.
> 
> All rasterization and all raytraced graphics are fake by remit.


Yes, older frames. Because the rays are intentionally not cast at the same position in the pixel every frame. Have you ever heard of MSAA stochastic sampling? I guess not. If you ignore the still images, which Chrispy is using in a fallacious way to convince people who don't know any better, and instead look at the noisy ground-truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit, in simplified terms, to sample the areas that were not sampled in the previous frame. You can send rays to different points within the area represented by a pixel to get better information about what the pixel should look like. That is the "samples/rays per pixel" we are talking about.

But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can also use those. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may be no data for the temporal denoiser to work with. Luckily this is not such a problem, because of how humans perceive motion. In fact, many game engines have taken advantage of this for decades, rendering scenes or parts of scenes at lower resolution while you move the camera around.
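The per-frame offset described above can be sketched very simply: cycle the sub-pixel sample position each frame so that, over a few frames, the whole pixel area gets covered and the temporal denoiser can merge the results into an effectively higher sample count. A toy sketch (the `jitter_offsets` helper and the 2x2 pattern are made up for illustration; real engines use Halton or blue-noise sequences):

```python
def jitter_offsets(frame_index, samples_per_frame=1):
    """Sub-pixel sample positions that change every frame.

    Uses a tiny 2x2 ordered pattern: over 4 consecutive frames, each
    quadrant of the pixel gets sampled once, so temporally accumulated
    frames together behave like 4 samples per pixel.
    Illustrative only, not any real engine's sequence.
    """
    pattern = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    return [pattern[(frame_index + i) % 4] for i in range(samples_per_frame)]

# Over four consecutive frames, every quadrant of the pixel gets hit once:
covered = {jitter_offsets(f)[0] for f in range(4)}
```

With 1 ray per pixel per frame, this is how four frames of history add up to the information of 4 rays per pixel, which is exactly why still screenshots of the raw ray pattern undersell what the temporal pipeline actually reconstructs.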


----------



## M2B (Aug 26, 2020)

What the hell does "fake ray tracing" even mean, lol. Everybody knows that if you want to do real-time RT you have to sacrifice ray count and rely on denoising to fill the damn scene. There is no such thing as "fake ray tracing".
Hundreds or even thousands of rays per pixel would be needed to do real-time RT without denoising, which is practically impossible to achieve.


----------



## Jinxed (Aug 26, 2020)

dragontamer5788 said:


> For those in the graphic arts community, there's a term called Unbiased Rendering. Why? Because even Raytracers are "fake" to some degree. Unbiased rendering is closest to a physical simulation by my understanding. However, biased-rendering (which includes many raytracing effects), are faster, and in many cases, converge faster as well. This leads to realistic-looking simulated drawings, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.
> 
> Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and its efficient to do, but there's no physical principle to temporal denoising. Light just doesn't "time travel" and average with future light photons that hit the same area.
> 
> ...


Of course there is a physical basis for temporal denoising. But not where you are looking for it; it's on the observer side, the human eye. We are doing temporal denoising all the time. Lightbulbs are actually pulsing, at a rate that depends on your electricity network frequency, which differs between countries; in Europe it is 50 Hz. They blink so fast that the eye averages the blinks and perceives a constant light source. The same goes for computer screens, old CRTs and even new LCD/IPS/whatever panels: the individual pixels are either blinking or traversing from one color to another. That is the pixel response time everyone talks about. Our eyes average that as well, and it has many side effects.

Ambient occlusion is not a raytracing technique; it is a classic rasterization thing. Raytraced ambient occlusion, which is in effect the global illumination everyone talks about, replaces it with actual, correct results. You can see the difference in this video at 2:20:


----------



## Caring1 (Aug 26, 2020)

chodaboy19 said:


> No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.
> 
> But it's assumed the power consumption can reach: (150W x 2 ) + 75W = 375W


And in my opinion the power consumption will be closer to two 6 pin connectors plus PCI-e slot.



KarymidoN said:


> My bad, I made a typo: 300W connector + 75W from PCIE. I don't understand why the box with the adapter that Seasonic was shipping said "850W PSU recommended"; that led me to believe these cards would be more power hungry. Most 650W Gold-level PSUs will do just fine if you're not overclocking these cards, so why the 850W recommendation from Seasonic?


It's not the watts, it's the amps that require the bigger-capacity PSU.


----------



## dragontamer5788 (Aug 26, 2020)

Jinxed said:


> Ambient Occlusion is not a ray tracing technique. That is a classic rasterization thing. Raytraced ambient occlusion, which is in fact the global illumination everyone talks about, replaces it with actual real results. You can see the difference in this video at 2:20:



You clearly don't understand raytraced AO.

Let's look at an actual photo of an actual corner of a room (from this blog post: https://www.nothings.org/gamedev/ssao/).





Literally, a picture of the upper corner of some dude's house. This is a real photograph.

Now let's look at the AO at 2:20. Not the 2D screen-space AO image, but the NVidia "raytraced AO" image:






AO is an approximation: something that works pretty well in a lot of cases but kind of falls apart once you know its "fakery". Regardless of how "fake" AO is, though, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.

AO exaggerates the shadows of corners. Sometimes it's correct: some corners in reality are very similar to AO corners. Take this corner from the photograph:





This corner is what AO is trying to replicate. However, corners don't *always* look like this in reality.

EDIT: Besides, this corner is cooler and more interesting to look at. So let's make all video game corners look like this, even if it's not entirely realistic. Making things look cool is almost the point of video games anyway.


----------



## Jinxed (Aug 26, 2020)

dragontamer5788 said:


> View attachment 166793
> 
> AO is an approximation, something that works pretty good in a lot of cases, but kind of fails if you know how its "fakery". However, regardless of how "fake" AO is, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.



This actually shows that it is you who does not understand how global illumination works. The amount and location of light and shadow also depend on the materials. You cannot compare the reflection of a corner in some random dude's house with the one in the Nvidia demo, because you have no way of knowing whether the materials are even remotely similar, with similar luminance etc. Take it to the extreme and imagine a corner of a room made completely of mirrors. Would that look anything like the random dude's corner? No.

The images in that demo can only be compare between themselves - the Screen Space Ambient Occlusion (SSAO, rasterization) to the raytraced ambient occlusion - because they are based on the same materials.

Also, the fact that classic AO sometimes looks right comes down to the same thing - materials. For some materials it may actually be almost correct.


----------



## TheoneandonlyMrK (Aug 26, 2020)

Lol


Jinxed said:


> Yes, older frames. Because the rays are intentionally not cast to the same position in the pixel every frame. Have you ever heard about MSAA stochastic sampling? I guess not. If you ignore the still images, which Chrispy is using in a fallacious way to convince people who don't know any better, and instead look at the noisy ground-truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit, in simplified terms, to sample data from the areas that were not sampled in the previous frame. You can send rays to different points in the area represented by a pixel to get better information about how the pixel looks. That is the "samples/rays per pixel" we are talking about. But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can also use those. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may not be data for the temporal denoiser to work with. Luckily this is not such a problem because of how humans perceive motion. And in fact many game engines have taken advantage of this for decades - rendering scenes or parts of scenes at lower resolution when you move the camera around.


You realise that to gain that long-term badge I have happily haunted every bit of tech news here and anywhere else, plus done some genuine hands-on with tech, so why the f##£ would I not actually know this stuff? And even though I effin' hate Nvidia's marketing tactics and company-buying too, I still own an RTX card, gits..

I saw all of that already I assure you.

I had a gaming PC with six GPUs in it once, just cos (Batman, eek).


----------



## Jinxed (Aug 26, 2020)

theoneandonlymrk said:


> Lol
> 
> You realise that to gain that long-term badge I have happily haunted every bit of tech news here and anywhere else, plus done some genuine hands-on with tech, so why the f##£ would I not actually know this stuff? And even though I effin' hate Nvidia's marketing tactics and company-buying too, I still own an RTX card, gits..
> 
> ...


It does not seem so from your posts.


----------



## TheoneandonlyMrK (Aug 26, 2020)

Jinxed said:


> It does not seem so from your posts.


Straws are being clutched, meeeoow.

It's late , you're lucky.

So, in short, we're all getting on board with GPU developers deciding, via AI supersampling, RTX etc., what the game developers actually wanted to show you?

I'll try it, but I'll probably only like it for online competitive play.


----------



## Jinxed (Aug 26, 2020)

theoneandonlymrk said:


> Straw's being clutched, meeeoow.
> 
> It's late , you're lucky.


So are you saying quantity > quality? Like the amount of posts you make is actually more important than WHAT'S IN THOSE POSTS? Cute. FYI, I've been in tech for a very long time. But this is the internet. Anyone can say anything, be it the truth or completely made up. Believe me at your own peril. For the same reason I do not believe you, as the quality of your posts does not support your claims.


----------



## dragontamer5788 (Aug 26, 2020)

Jinxed said:


> This actually shows that it is you who does not understand how global illumination works. The amount and location of light and shadow also depend on the materials. You cannot compare the reflection in a corner of some random dude's house with the one in the Nvidia demo, because you have no way of knowing whether the materials are even remotely similar, with similar luminance etc.



Look, I know you're getting egged on by some other users right now. So I'll try to cut you some slack here. 

Let me just give you a few links on this issue:

* https://docs.blender.org/manual/en/2.79/render/blender_render/world/ambient_occlusion.html



> Ambient Occlusion is a sophisticated ray-tracing calculation which simulates soft global illumination shadows by faking darkness perceived in corners and at mesh intersections, creases, and cracks, where ambient light is occluded, or blocked.
> 
> There is no such thing as AO in the real world; AO is a specific not-physically-accurate (but generally nice-looking) rendering trick. It basically samples a hemisphere around each point on the face, sees what proportion of that hemisphere is occluded by other geometry, and shades the pixel accordingly.



* https://rmanwiki.pixar.com/display/REN/PxrOcclusion



> _PxrOcclusion_ is a non-photorealistic integrator that can be used to render ambient occlusion, among other effects.



* https://docs.arnoldrenderer.com/display/A5AFMUG/Ambient+Occlusion



> Ambient occlusion is an approximation of global illumination that emulates the complex interactions between the diffuse inter-reflections of objects. While not physically accurate (for that use full global illumination), this shader is fast and produces a realistic effect.



All three 3d programs above are *Raytracers*, implementing raytraced ambient occlusion. All three claim that the effect is "fake" to some degree. No one who knows what they're talking about would ever claim that ambient occlusion is physically accurate.
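To make the hemisphere-sampling idea from the Blender quote concrete, here's a toy sketch in Python. The scene, names, and sampling scheme are mine, purely for illustration; real renderers do this on the GPU with far smarter sampling:

```python
import math
import random

random.seed(0)  # deterministic for the example

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere intersection test (unit direction assumed);
    # returns True on any hit in front of the origin.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    return disc >= 0 and (-b - math.sqrt(disc)) / 2.0 > 1e-6

def ambient_occlusion(point, normal, spheres, samples=256):
    # Sample random directions over the hemisphere above `point` and count
    # how many are blocked by nearby geometry. Note that no light is ever
    # simulated here, which is exactly why AO is "fake".
    occluded = 0
    for _ in range(samples):
        d = [random.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-x for x in d]  # flip into the upper hemisphere
        if any(ray_hits_sphere(point, d, c, r) for c, r in spheres):
            occluded += 1
    return 1.0 - occluded / samples  # 1.0 = open sky, 0.0 = fully blocked

scene = [((0.0, 0.0, 1.2), 1.0)]  # one sphere hovering over the floor
under_sphere = ambient_occlusion((0, 0, 0), (0, 0, 1), scene)
in_the_open = ambient_occlusion((5, 0, 0), (0, 0, 1), scene)
# The point under the sphere comes out much darker than the open one.
```

Screen-space AO (SSAO) approximates the same hemisphere test using only the depth buffer, which is cheaper still and correspondingly more wrong.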


----------



## TheoneandonlyMrK (Aug 26, 2020)

Jinxed said:


> So are you saying quantity > quality? Like the amount of posts you make is actually more important than WHAT'S IN THOSE POSTS? Cute. FYI, I've been in tech for a very long time. But this is the internet. Anyone can say anything, be it the truth or completely made up. Believe me at your own peril. For the same reason I do not believe you, as the quality of your posts does not support your claims.


I pointed out that you're talking about roughly the same process with different terminology; perspectives differ, and you're just wrong.

The human consciousness makes up 68% of what you see while your eyes flick around like mad, focusing on the next most important thing scanned subconsciously by your peripheral senses...

So what you see is mostly what you want to.

And all methods thus far devised are fake representations of real world's, none exclusively.


----------



## Jinxed (Aug 26, 2020)

dragontamer5788 said:


> Look, I know you're getting egged on by some other users right now. So I'll try to cut you some slack here.
> 
> Let me just give you a few links on this issue:
> 
> ...


We are talking about games here. Ambient occlusion in games is based on rasterization - in fact on the depth information and normals of the surfaces being occluded.



dragontamer5788 said:


> Let me just give you a few links on this issue:  ...



From your own source:
"Ambient occlusion is an approximation of global illumination that emulates the complex interactions between the diffuse inter-reflections of objects. While not physically accurate (for that use full global illumination), this shader is fast and produces a realistic effect."

That is exactly what I was talking about. And that is why Nvidia talks about global illumination all the time. You can get raytraced ambient occlusion (as in "the effect you used to get via SSAO", not the actual technique) using pure raytraced output. I still don't understand why you even brought ambient occlusion into this discussion. Nobody suggests that approach in raytraced games.


----------



## dragontamer5788 (Aug 26, 2020)

Jinxed said:


> I still don't understand why you even brought Ambient Occlusion into this discussion.



You're welcome to review the post where I first brought up AO. The one you responded to here just a few hours ago:









(linked post: "NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed" - www.techpowerup.com)






> Ambient Occlusion is another funny biased-rendering technique. It's completely fake. Corners do *NOT* absorb light into invisible black holes. But we use AO techniques because they make shadows look deeper, with more contrast, which aids the video game player significantly.



------

My point is that Chrispy_'s discussion point about "fake" raytracing techniques is accurate. There's a lot of fakery going on in today's video games (and even movies). The fakery gets better every year, but if you train your eye to see how these 3D simulations are "fake", it becomes pretty easy to pick out the inaccuracies. Yes, even against raytracers (even movie-class raytracers like Arnold). AO happens to be one fake technique that I'm able to pick up on somewhat easily. Yes, even in the NVidia demo you linked on the last page.

Raytracing can be fake (see Raytraced AO as a perfect example). Don't assume something is physically accurate just because some marketing material in a slick youtube video tells you so.


----------



## Chrispy_ (Aug 26, 2020)

Jinxed said:


> That is a complete lie. A "library of tile-based approximation" is completely made up. There are denoisers at work, which you are obviously unable or unwilling to comprehend. The noisy ground truth output you posted is exactly the noisy ground truth that you can see in this video:
> 
> 
> 
> ...


So this is photo mode (static scene, static lighting, 8000 samples) run through an equalised histogram to expand the contrast range and draw out the tiling patterns that I mentioned were clearly visible in dark scenes. This is a single image, but the patterns are far more obvious in motion: you can control their angle by moving the camera, and your persistence of vision adds a level of temporal blur that helps pick these repeating patterns out of the truly random ray samples, which (without a denoiser) manifest as static noise like an old untuned TV set.

Important notes:

* *No denoise* filter
* *No temporal* AA
* The highlighted surface of the railgun is textureless and smooth. It *should not have patterns* on it, especially not after 8000 samples.






These are repeating chunks of ordered noise - tiles, as I have called them - that *cannot* be true raytracing. It doesn't match the Gaussian/quantization noise you'd get from a non-infinite number of rays, and it has no place being there.

Nvidia may claim that they didn't cheat, but that's not a quote you can take out of context and apply to everything about raytracing - it's specific to the context of how they did the lighting - with raytracing methods rather than pre-baked static lightmaps or screen-space occlusion via shaders.

The way they did the raytracing is overflowingly full of cheats and shortcuts because _they have to_. We need 2-3 orders of magnitude more compute power to achieve cheat-free realtime raytracing at current resolutions. If you can't accept that then I don't know what else to say or how to explain it.
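For anyone who wants to reproduce the contrast-expansion step, histogram equalisation is a standard operation; here's a stdlib-only Python sketch (illustrative, not the exact tool used on the screenshot above). It maps each grey level to its cumulative frequency, which stretches subtle level differences - like faint ordered noise in a dark area - across the full range:

```python
# Histogram equalisation over an 8-bit greyscale image (list of rows).
# Low-contrast structure in dark regions gets stretched across the full
# 0-255 range, which is how faint repeating patterns become visible.

def equalise(image):
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, rescaled to the 0..255 output range.
    cdf, total = [0] * 256, 0
    for level in range(256):
        total += hist[level]
        cdf[level] = round(255 * total / n)
    return [[cdf[p] for p in row] for row in image]

# A nearly-black image whose pixels differ by only a few grey levels:
dark = [[10, 11, 10, 12], [11, 10, 12, 10]]
stretched = equalise(dark)
# The 3-level spread (10..12) now spans most of the output range.
```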


----------



## Jinxed (Aug 26, 2020)

dragontamer5788 said:


> You're welcome to review the post where I first brought up AO. The one you responded to here just a few hours ago:


So basically you brought ambient occlusion into the discussion, even though no one mentioned it before and no one suggests actually using it in raytraced games, because you read somewhere that it's fake and that it exists in some professional renderers that also do raytracing - even though they clearly state themselves that it's fake, and that if you want the real thing you should use global illumination. Is that it? Yes, ambient occlusion is fake. I said that myself. That's why Nvidia demonstrates how to get rid of it with global illumination. Are we done here?



dragontamer5788 said:


> My point is that Chrispy_'s discussion point on "fake" raytracing techniques is accurate.



No, it is not. And yes, there is a lot of fakery in today's games. That's why we're replacing it with physically based rendering - raytracing - to get rid of the fakes. A person who tried to compare screenshots of reflections on two completely different materials is not going to convince me otherwise.



Chrispy_ said:


> So this is photo mode (static scene, static lighting, 8000 samples) run through an equalised histogram to expand the contrast range and draw out the tiling patterns that I mentioned were clearly visible in dark scenes. This is a single image but the patterns are way more obvious in motion because you can control their angle by moving the camera and your persistence of vision adds a level of temporal blur that helps pick these repeating patterns out of the truly random ray samples that manifest (without denoiser) as static noise like an old untuned TV set.
> 
> Important notes:
> 
> ...


Those are not patterns. You are trying to conjure things where there are none.


----------



## dragontamer5788 (Aug 26, 2020)

Jinxed said:


> no one suggests actually using it in raytraced games



Really? No one? Not a single company you can think of that's pushing for Raytraced Ambient Occlusion?

* NVidia:  







* Unity: https://docs.unity3d.com/Packages/c...@7.1/manual/Ray-Traced-Ambient-Occlusion.html
* Unreal: https://docs.unrealengine.com/en-US/Engine/Rendering/RayTracing/index.html

------

I'm not even "against" AO. It looks cool. It improves contrast, and it helps video game characters stick out. But it's a fake effect for sure. And it's no surprise: true unbiased global illumination is far beyond the capabilities of modern computers. I'm not even talking about "realtime"; I'm talking about movies that spend 8+ hours per frame on supercomputer clusters. Ground-truth global illumination is simply too expensive to actually calculate.

So AO, a "cheaty fake" shadow system, will remain. It's the best we've got at our current level of computing.


----------



## Chrispy_ (Aug 26, 2020)

Jinxed said:


> Those are no patterns. You are trying to conjure things where there are none.


The denial is strong here. I can turn up the contrast further just by repeating the process in a darker area, but if you can't see those patterns then it's time to get your eyes tested. Hell, fire up Quake II yourself and disable all the filters to see them in motion for yourself; motion makes it 10x more obvious.

Or, continue the denial; I don't gain anything from your acceptance and it doesn't reflect on me.


----------



## Jinxed (Aug 26, 2020)

dragontamer5788 said:


> Really? No one? Not a single company you can think of that's pushing for Raytraced Ambient Occlusion?


OMG. I said it before, and it seems I need to repeat it - they are referring to raytraced ambient occlusion because the whole gaming industry has been referring to the "darkened corners effect" as ambient occlusion.

ON THE FIRST SLIDE of the Nvidia video it says:
"Physically correct ambient occlusion"

Which should have hinted to you why they are using that term. They are replacing ambient occlusion with a physically correct effect. It's even in the description under the video: "SSAO (Screen Space Ambient Occlusion) is a popular but limited process being used in contemporary games. Ray tracing provides better results."

I feel like I'm talking to a wall. Please actually watch the video. They are not suggesting anything like the sphere-based approximations some of the renderers you mentioned use. They are simply casting rays. What they are talking about in the slides is how the DENOISER was modified to work well, noting that some areas need more samples per pixel to produce correct results when raytracing. Nothing else. Nothing faked.

Please, next time, at least watch the video before you post it.



Chrispy_ said:


> The denial is strong here. I can turn up the contrast more just by repeating the process in a darker area for even more obvious contrast but if you can't see those patterns then it's time to get your eyes tested. Hell, fire up QuakeII for yourself and disable all the filters to see them in motion for yourself, motion makes it 10x more obvious.
> 
> Or, continue the denial; I don't gain anything from your acceptance and it doesn't reflect on me.


The imagination is strong on your part.






Noise-induced order - Wikipedia (en.wikipedia.org)
				




I hope we are done here.


----------



## InVasMani (Aug 26, 2020)

The thing to do would be to use two blower fans and some cutout inlets in the middle. The inlets could simply draw air through, with a blower on the bottom rear of the PCB pushing it toward the middle and another fan on the top front of the card expelling all the heat it draws in out the back of the case. The other option is two blowers on the rear of the PCB and some inlets woven between cutout holes in the PCB for heat pipes to fit through.

The big thing is that two blower fans could definitely expel heat outside the case more quickly, and by utilizing both the top and bottom of the PCB you've got more area for heatsink cooling. This isn't the first time I've mentioned the concept of pairing blower fans with active cooling on both the top and bottom of a GPU in a 3-slot cooler design. Something I hadn't thought of in the past is the addition of cutout inlets to weave heat pipes through from the bottom to the top of the PCB, which I think would be great - or even just air holes to draw the hot air up and out rather than letting it get trapped against the PCB under load, heating it up in the process, which is far from ideal.


----------



## dicktracy (Aug 27, 2020)

This is the equivalent of a supercar.


----------



## ppn (Aug 27, 2020)

Nothing beats deshrouding a non-reference cooler and fitting custom 100 mm or bigger fans that cover the shape of the heatsink perfectly: 20°C lower at 20 dB less. But those geniuses are far from perfecting this thing; give or take another 10 years and they may eventually get there. They have to go through all the possible sketchy designs first.


----------



## medi01 (Aug 27, 2020)

M2B said:


> That's not traditional rasterization, that demo uses some form of Tracing for the global illumination system in fact.



Epic was asked "so, were you using RT?" And the answer was "nope".
Let's talk about "traditional" and "non-traditional" rasterization, shall we.


----------



## DuxCro (Aug 27, 2020)

When you consider that the Xbox Series X RDNA 2 GPU has 12 TFLOPS of performance (on par with the RTX 2080 Super), the high-end RDNA 2 graphics card will surely be faster than the 2080 Ti. Probably on par with the 3080. So Nvidia had to create this 3090 monstrosity to make sure it remains the leader with the fastest discrete GPU.


----------



## laszlo (Aug 27, 2020)

Krzych said:


> The two 8-pin connections of 12-pin cable go into the PSU, this is different than 150W 8-pin you plug into the GPU. These are the slots that normally power your 2x8-pin cable, rated up to 300W. So theoretically 12-pin is up to 600W.
> 
> This doesn't necessarily need to be a hint at anything about Ampere's power draw, but it could mean that even Founders Edition is going to be able to pull over 375W. Most likely not with reference TDP, but after raising power target to the cap. Theoretically there would be no need for dual 8-pin if it was to be capped at 320W like 2080 Ti. Using two slots on the PSU instead of one is certainly some kind of compatibility concern, they wouldn't go for it if it wasn't needed. I wonder if there is going to be single 8-pin version for lower end cards like 3070, assuming they get 12-pin too.



Wrong. The two 8-pin connectors of the 12-pin cable (provided by the PSU maker) use the same slots on the PSU as the current 8-pin cables do - one connector per cable - so it's the same wattage per connector. Each PSU maker will have to make these new cables available, as connector/slot types on PSUs vary by brand...

In addition, for those who don't have modular PSUs but do have the two 8-pin PCIe cables, there will be adapters - two female 8-pin to one male 12-pin. This adapter can also be used with a modular PSU by connecting the existing PCIe cables, so you won't have to buy the above-mentioned special cable, which may be expensive compared to the adapter, btw...


----------



## Chrispy_ (Aug 27, 2020)

ppn said:


> Nothing beats the non reference cooler deshroud plus custom 100mm or bigger fans that cover the shape of the heatsink perfectly. 20C lower at 20dB less. But those geniuses are far from perfecting this thing give or take another what 10 years and they may eventually get there. have to go through all the possible sketchy desings first.


AMD and Nvidia are unwilling to design a card that looks industrial and - much like motherboard manufacturers - are obsessed with functional compromises for the sake of aesthetic design cues. Most of the time the aesthetic choices are tons of decorative plastic, lighting strips, and airflow obstructions that exist only to hold a brand logo or name plate.

The obvious solution is much like you say: a full-coverage vapor chamber covering the GPU, VRAM, and VRMs, all connected to a 280 mm heatsink that gets optimal cooling from a pair of regular 140 mm fans controlled via a PWM header on the GPU board. Third-party solutions exist in the form of huge aftermarket air coolers, but they are all poor compromises designed to fit a wide range of cards, and they can never be as specific to one card as the cooling solution an OEM designs for that one exact board layout.


----------



## Initialised (Aug 27, 2020)

iO said:


> It's right there at ~2:25. Also no crazy airflow apocalypse in the case as some suggested. Back fan is in pull config. Edit: Or they revised the design and put the second fan on the front like a sane person would do...
> View attachment 166751



Look at all that innovation:

My old XFX Fury (2015???) had a triple-fan cooler with a short PCB like that, so the third fan blew upwards; it resulted in lower CPU temperatures under combined load vs. CPU-only load in some situations.









I suspect a similar approach will be taken on many AIB cards. Does this mean the 3090 is going with HBM? Surely you can't squash 20+ GB of GDDR6 onto a short PCB like that?

Got a triple slot cooler on your new top end card? Welcome to 2008, you're going to love it!





Gainward HD 4870 X2 Golden Sample With a Three Slot Cooler (www.geeks3d.com)
				




Got a vapour chamber on your GPU? Welcome to 2006, you're going to love it.





SAPPHIRE Technology Limited (www.sapphirenation.net)


----------



## Chrispy_ (Aug 27, 2020)

Initialised said:


> Look at all that innovation:
> 
> My old XFX Fury (2015???) had a triple-fan cooler with a short PCB like that, so the third fan blew upwards; it resulted in lower CPU temperatures under combined load vs. CPU-only load in some situations.
> 
> ...


Nvidia is just taking the 'Apple stance' of re-using an existing idea, claiming they thought of the innovation themselves, and then charging extra for the privilege.
Fanboys will take them at their word because they're in love, or something like that.


----------



## Jinxed (Aug 27, 2020)

Chrispy_ said:


> Nvidia's just taking the 'Apple stance' of re-using an existing idea and claiming they thought of the innovation themselves, and then charging extra for the privilege.
> Fanboys will take them at their word because they're in love, or something like that.


Nowhere in the video do they claim anything like that. In typical AMD-fanboy fashion you are making things up, just like in our previous discussion. The only part where they mention something that was not done before relates to the springs and backplate attachment - not the actual airflow, the cut-out board, or anything else you mentioned. I urge whoever is reading this to actually watch the video.


----------



## dragontamer5788 (Aug 27, 2020)

Jinxed said:


> In a typical AMD fanboy fashion you are making things up



The dude was literally posting screenshots of a 2060 doing raytracing in Quake on the last page. He has a freaking NVidia GPU. He's willing to spend large amounts of time figuring out how it works.









(linked post: "NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed" - www.techpowerup.com)
				





> Raytracing a scene fully on my 2060S at native resolution still takes 20 seconds to get a single, decent-quality frame


----------



## Krzych (Aug 27, 2020)

laszlo said:


> Wrong. The two 8-pin connectors of the 12-pin cable (provided by the PSU maker) use the same slots on the PSU as the current 8-pin cables do - one connector per cable - so it's the same wattage per connector. Each PSU maker will have to make these new cables available, as connector/slot types on PSUs vary by brand...
> 
> In addition, for those who don't have modular PSUs but do have the two 8-pin PCIe cables, there will be adapters - two female 8-pin to one male 12-pin. This adapter can also be used with a modular PSU by connecting the existing PCIe cables, so you won't have to buy the above-mentioned special cable, which may be expensive compared to the adapter, btw...



Your 2x8-pin cable connects to the PSU with one 8-pin connector, not two. If the 12-pin cable uses two, then it should be able to pull 600 W, at least in theory. But if that's not the case, then what is all the crying about power draw for, if the card can't possibly pull more than your regular 2x8-pin one? The 2080 Ti already pulls more than that after OC, since it is not entirely satisfied with a 380 W power limit - and that's with 11 GB of memory, not 24. If the 3090 can deliver a 50% performance uplift at the same power and with 24 GB of memory, it will be very efficient.
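The arithmetic in the posts above, written out (the per-connector ratings are the numbers quoted in this thread, not an official spec):

```python
# Hypothetical wattage budget for the 12-pin adapter, using the ratings
# quoted in this thread (PSU-side 8-pin slot: 300 W, GPU-side 8-pin: 150 W,
# PCIe slot: 75 W). These are the thread's numbers, treat as approximate.

PSU_SIDE_8PIN_W = 300   # per the post above: PSU modular slot rating
GPU_SIDE_8PIN_W = 150   # standard GPU-side PCIe 8-pin connector limit
PCIE_SLOT_W = 75        # power delivered through the motherboard slot

twelve_pin_ceiling = 2 * PSU_SIDE_8PIN_W               # 600 W, if both PSU slots are used
classic_dual_8pin = 2 * GPU_SIDE_8PIN_W + PCIE_SLOT_W  # 375 W, the usual spec cap
```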


----------



## dragontamer5788 (Aug 27, 2020)

DuxCro said:


> When you consider that the Xbox Series X RDNA 2 GPU has 12 TFLOPS of performance (on par with the RTX 2080 Super), the high-end RDNA 2 graphics card will surely be faster than the 2080 Ti. Probably on par with the 3080. So Nvidia had to create this 3090 monstrosity to make sure it remains the leader with the fastest discrete GPU.



I don't know if we should be comparing flops-for-flops across architectures. Case in point: Vega 64 had ~12 TFLOPS (single precision) of performance.

NVidia has always had fewer TFLOPS than comparable AMD chips, and yet NVidia delivers higher FPS when it actually comes to games. I'm sure a lot of it is the PTX compiler and/or other parts of the driver.

With that being said: RDNA has made advancements in efficiency. And the Xbox Series X / PS5 seem to have capable raytracing hardware (ray-box and ray-triangle intersection units).
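For context, theoretical FP32 throughput is just shader count times clock times two (one fused multiply-add counts as two ops); the figures below are the commonly cited specs and should be treated as approximate:

```python
# Theoretical FP32 TFLOPS = shaders x clock (GHz) x 2 (FMA = 2 ops/clock).
# Shader counts and boost clocks are the commonly cited figures for each
# part; treat them as approximate.

def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000  # GFLOPS -> TFLOPS

vega64 = tflops(4096, 1.546)    # ~12.7 TFLOPS on paper
series_x = tflops(3328, 1.825)  # ~12.1 TFLOPS (52 CUs x 64 shaders)
# ...yet paper TFLOPS routinely fail to predict relative game performance.
```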


----------



## Chrispy_ (Aug 27, 2020)

Jinxed said:


> Nowhere in the video do they claim anything like that. In typical AMD-fanboy fashion you are making things up, just like in our previous discussion. The only part where they mention something that was not done before relates to the springs and backplate attachment - not the actual airflow, the cut-out board, or anything else you mentioned. I urge whoever is reading this to actually watch the video.


AMD fanboy. LOL, I think you've earned yourself an ignore. 
A discussion with you is much like arguing with a flat-earther: facts are irrelevant and you're in denial of real evidence.


----------



## Jinxed (Aug 27, 2020)

Chrispy_ said:


> AMD fanboy. LOL, I think you've earned yourself an ignore.
> A discussion with you is much like arguing with a flat-earther; Facts are irrelevant and you're in denial of real evidence.


So where in the video does Nvidia say they invented all this? Could you give us a timestamp where we can see and hear that, since you're so full of relevant facts?



dragontamer5788 said:


> The dude literally was posting screenshots of a 2060 doing Raytracing in Quake in the last page. He has a freaking NVidia GPU. He's willing to spend large amounts of time figuring out how it works.
> 
> 
> 
> ...


Yes, of course he is making things up. FYI, I do have Quake RTX - in fact the full game including all expansions, not just the demo - and I played through the whole thing, including experimenting, on my RTX 2080 Ti. There is nothing like what he is claiming happening in the game.





He is actually artificially amplifying any shimmer in the picture when he "run [it] through an equalised histogram to expand the contrast range and draw out the tiling patterns". What he is trying to convince us are tiling patterns is nothing more than the effect of multiple other phenomena, including but not limited to noise-induced order and the fact that floating-point numbers in computer software are in reality discrete, not continuous.

"One distinguishing feature that separates traditional computer science from scientific computing is its use of discrete mathematics (0s and 1s) instead of continuous mathematics and calculus. Transitioning from integers to real numbers is more than a cosmetic change. Digital computers cannot represent all real numbers exactly, so we face new challenges when designing computer algorithms for real numbers. Now, in addition to analyzing the running time and memory footprint, we must be concerned with the "correctness" of the resulting solutions. This challenging problem is further exacerbated since many important scientific algorithms make additional approximations to accommodate a discrete computer. Just as we discovered that some discrete algorithms are inherently too slow (polynomial vs. exponential), we will see that some floating point algorithms are too inaccurate (stable vs. unstable)."






Floating Point (introcs.cs.princeton.edu)
				



I hope Princeton as a source is good enough. 

He expanded the contrast to the extreme, chasing ghosts. Nothing else.
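The discreteness point itself is easy to demonstrate from Python, for what it's worth:

```python
import math
import struct

# Floats are discrete: there is a finite gap (a ULP) between adjacent
# representable values, and it grows with magnitude. GPU shaders mostly
# use 32-bit floats, whose gaps are far wider than double precision's.

gap_at_1 = math.ulp(1.0)        # spacing of doubles just above 1.0
assert gap_at_1 == 2.0 ** -52   # ~2.2e-16

def to_f32(x):
    # Round-trip through a 32-bit float encoding to get float32 rounding.
    return struct.unpack("f", struct.pack("f", x))[0]

# 1.0 + 1e-8 is representable as a double, but collapses to exactly 1.0
# in single precision (float32 epsilon is ~1.2e-7):
assert 1.0 + 1e-8 != 1.0
assert to_f32(1.0 + 1e-8) == 1.0
```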


----------



## dragontamer5788 (Aug 27, 2020)

Let's see. On the one hand, there's Chrispy_, a dude who has been discussing things reasonably over at TechReport for over a decade and has proven to me (multiple times) that he has a sharp mind.

On the other hand, there's Jinxed, with ~50 posts in his history. And literally every single one of them is about Nvidia vs AMD bullshit I don't care about. Someone who gets flustered over the simple mention of ambient occlusion and... floating-point numbers.

Clean up your posting history, Jinxed. Start posting about other topics and prove yourself if you expect me to take you seriously.


----------



## Jinxed (Aug 27, 2020)

dragontamer5788 said:


> Let's see. On the one hand, there's Chrispy_, a dude who has been discussing things reasonably over at TechReport for over a decade and has proven to me (multiple times) that he has a sharp mind.
> 
> On the other hand, there's Jinxed, with ~50 posts in his history. And literally every single one of them is about Nvidia vs AMD bullshit I don't care about. Someone who gets flustered over the simple mention of ambient occlusion and... floating-point numbers.
> 
> Clean up your posting history, Jinxed. Start posting about other topics and prove yourself if you expect me to take you seriously.


Says the guy who was trying to make a point by comparing light propagation in corners of rooms with completely different materials on the walls. Yeah, right. I take you sooooo seriously after that.


----------



## dragontamer5788 (Aug 27, 2020)

Chrispy_ said:


> These are repeating chunks of ordered noise, tiles - as I have called them - that *cannot* be true raytracing. It doesn't match gaussian/quantization noise that you'd get from a non-infinite number of rays and it has no place being there.



I've actually looked at lots of RNG patterns before. A repeating pattern like that could be explained by a PRNG, like XORShift. Honestly, to me, it looks like it's "well randomized", but with a relatively small tile.

Most GPUs these days are only fast with 32-bit numbers. I'm going to guess that they just have a small PRNG state, so the "tiles" will be small. They're using something higher quality than your typical Unix LCGRNG for sure (see here for LCGRNG patterns: http://www.reedbeta.com/blog/quick-and-easy-gpu-random-numbers-in-d3d11/).

32-bit cycles go fast (4 billion is pretty small, all things considered), and many PRNGs fail at actually randomizing their bit differences... so you'll get repeating patterns over the 32-bit cycles. And if you're seeing the same pattern over and over again, it seems like the seed has just "cycled over" like an odometer.

EDIT: 1920 x 1080 x 8000 samples == 16.6 billion samples, enough to cycle a 32-bit PRNG almost four times. Just some napkin math (and each sample may use more than one RNG value in the raytracing). Throw in some low-quality lower bits, and a "pattern" could very well emerge.

Just my opinion on that matter though.
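To make the small-state idea concrete, here's a minimal sketch of the kind of generator being described. This is xorshift32, a classic 32-bit PRNG, not whatever NVIDIA actually uses; the point is just that a 32-bit state can only produce at most 2^32 - 1 values before the sequence repeats:

```python
def xorshift32(state):
    """One step of Marsaglia's classic 32-bit xorshift PRNG.

    The entire state is one 32-bit word, so the output sequence
    must repeat after at most 2**32 - 1 draws.
    """
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

# Napkin math from the post: a 1080p frame accumulated over 8000
# samples per pixel draws more values than the generator's whole
# period, so the same subsequences can recur on screen.
samples = 1920 * 1080 * 8000   # ~16.6 billion draws
period = 2**32 - 1             # xorshift32's maximum period
print(samples / period)        # roughly 3.9 full cycles
```

Whether the real renderer wraps its generator like this is speculation, but it shows why a small PRNG state makes repeating "tiles" plausible.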


----------



## Jinxed (Aug 27, 2020)

dragontamer5788 said:


> I've actually looked at lots of RNG patterns before. A repeating pattern like that could be explained by a PRNG, like XORShift. Honestly, to me, it looks like it's "well randomized", but with a relatively small tile.
> 
> Most GPUs these days are only fast with 32-bit numbers. I'm going to guess that they just have a small PRNG state, so the "tiles" will be small. They're using something higher quality than your typical Unix LCGRNG for sure (see here for LCGRNG patterns: http://www.reedbeta.com/blog/quick-and-easy-gpu-random-numbers-in-d3d11/).
> 
> ...


Patterns can even be intentional, like here, for example:
"Ray tracing has two quality settings: high and ultra. Ultra setting traces up to one ray per pixel, with all the denoising and accumulation running in full. The high setting traces up to 0.5 rays per pixel, essentially in a checkerboard pattern, and one of the denoising passes runs as checkerboard. We recommend high for the best balance between image quality and performance, but please note that we are still experimenting a lot, so this information is valid only at the time of writing."
Interview on Metro Exodus with the developer's CTO and rendering programmer








Tech Interview: Metro Exodus, ray tracing and the 4A Engine's open world upgrades (www.eurogamer.net)

But that pattern is about where the rays are cast, not about replacing or faking something with the kind of artificial tiles Chrispy_ is making up. It's not some made-up fake thing. The rays still go to those locations and do their thing as normal. The devs are just reducing the number of rays to improve performance.
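The "0.5 rays per pixel, essentially in a checkerboard pattern" from the interview can be pictured as a simple parity mask. A hedged sketch (my own illustration, not 4A Games' actual code):

```python
def traces_ray(x, y, frame):
    """Checkerboard ray selection: trace a ray only where (x + y)
    has the right parity, flipping parity each frame so every pixel
    is eventually covered. Roughly 0.5 rays per pixel per frame."""
    return (x + y + frame) % 2 == 0

# On a tiny 4x4 image, exactly half the pixels trace a ray each frame;
# the untraced half is filled in by the denoiser / previous frame.
traced = sum(traces_ray(x, y, 0) for y in range(4) for x in range(4))
print(traced)  # 8 of 16 pixels
```

Which half is traced alternates every frame, which is why the pattern is visible in a single frame's noise but averages out over time.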


----------



## BoboOOZ (Aug 27, 2020)

Jinxed said:


> Those are no patterns. You are trying to conjure things where there are none.


There's obvious weird banding in those noise patterns; you just seem to like living within the Matrix.



Jinxed said:


> Patterns can even be intentional, like here, for example:


I'm confused now, I thought there were no patterns.


----------



## yotano211 (Aug 27, 2020)

QUANTUMPHYSICS said:


> I'm gonna sell my 2080Ti on Ebay and get as much as possible for it.
> Definitely going for the 3090, as my PSU can handle it.
> Gonna buy it on my card, get the Rewards Flyer points for it and then write the whole thing off as a business expense.


I do the same thing with a new or used laptop almost every year.


----------



## Chrispy_ (Aug 27, 2020)

dragontamer5788 said:


> I've actually looked at lots of RNG patterns before. A repeating pattern like that could be explained by a PRNG, like XORShift. Honestly, to me, it looks like it's "well randomized", but with a relatively small tile.
> 
> Most GPUs these days are only fast with 32-bit numbers. I'm going to guess that they just have a small PRNG state, so the "tiles" will be small. They're using something higher quality than your typical Unix LCGRNG for sure (see here for LCGRNG patterns: http://www.reedbeta.com/blog/quick-and-easy-gpu-random-numbers-in-d3d11/).
> 
> ...


Yeah, those tiles as I called them could easily be pseudorandom repetition from an interpolation algorithm too. I only said tiles because they looked like tiles; I'm just describing what they look like. There are all sorts of ways that repeating pattern could be generated, and PRNG is as good a hypothesis as any, probably better than tiles, because it wouldn't require any additional VRAM to store a tile library; it'd be generated rapidly on the fly.

I'm familiar with pure CPU and GPU raytracers from a career perspective. Whilst I don't actively model and render myself, I'm responsible for budgeting and buying hardware farms for the teams that do, and in 20 years of doing this I've never seen repeating patterns like this, so I'm reasonably certain they're not generated by true raycasts. The various videos by devs on how they implemented DXR are all pretty cool, but I suspect the real secret sauce of what generates these patterns is another Nvidia black box that contains proprietary methods that Nvidia don't want to divulge to the competition. Unless someone finds more official detail, we're left to speculate with the evidence we can gather from the end result.


----------



## Jinxed (Aug 27, 2020)

BoboOOZ said:


> There's obvious weird banding in those noise patterns, you just seem to like living within the Matrix  .


You are seeing things.



BoboOOZ said:


> I'm confused now, I thought there were no patterns.


The pictures taken are from Quake II RTX. The interview is about Metro Exodus. Two different games, two different implementations.



Chrispy_ said:


> I only said tiles because they looked like tiles; I'm just describing what they look like. There are all sorts of ways that repeating pattern could be generated, and PRNG is as good a hypothesis as any, probably better than tiles, because it wouldn't require any additional VRAM to store a tile library; it'd be generated rapidly on the fly.


So now they are no longer tiles. Hmm. And it only took a few pages of debunking your "facts".


----------



## InVasMani (Aug 28, 2020)

Initialised said:


> Look at all that innovation:
> 
> My old XFX Fury (2015???) had a triple fan cooler with a short PCB like that so the third fan blew upwards, it resulted in lower CPU temperatures under combined load vs just CPU load in some situation.
> 
> ...


 That's kind of along the lines of why I think a section of the PCB being cut out as an air passage between the top and bottom of the PCB, with a twin-blower setup, would work well in principle: just push-pull-push, and the natural tendency of heat to rise will even do some of the work for you. The way it's been with GPUs for years, all the heat rises, and any heat that doesn't get pushed out the rear of the case (which even with a blower fan is probably relatively limited) gets fairly stuck in place. Sure, it can kind of trickle out the sides, but I bet a lot of it still remains stagnant and adversely impacts PCB temps a fair bit. Really, even a few holes in the PCB, like the ones used to mount M.2 drives, four to eight spaced out to let some of the heat escape upward, wouldn't hurt at all. They could do that and put two blowers on top of the card with a heavily perforated hexagonal backplate to let air escape, pulling heat upward and pushing it out the rear of the case, perhaps with a pair of top-down coolers on the bottom. Basically four fans: the card won't bend, all the heat gets expelled out the rear of the case efficiently, and the fans could trivially run at low, quiet RPMs. They could even dynamically cycle between the top two fans and the bottom pair.


----------



## laszlo (Aug 28, 2020)

Krzych said:


> Your 2x8-pin cable connects to the PSU with one 8-pin connector, not two.




Nope, two cables and two connectors, one for each in the PSU... I don't know what PSU you have, but I haven't yet seen the combination you mentioned...


----------



## Fry178 (Aug 28, 2020)

@DuxCro
Last time I checked, "you" aim for first place, not second..
So you want NV to do less, just so AMD can have something close in performance?
What other product on this planet gets artificially castrated so the competition can catch up?
----



Funny how many people here "know" how good one brand will be, and how badly the other will do, on cards that haven't seen the light of day yet,
nor been reviewed.

Please make a sign and put it on your fridge:
*it could/should/would..*
The good thing? It will work for ANY future release of any hardware.


----------



## kayjay010101 (Aug 28, 2020)

laszlo said:


> Nope, two cables and two connectors, one for each in the PSU... I don't know what PSU you have, but I haven't yet seen the combination you mentioned...


I've seen both implementations. My prior EVGA PSU had a single 8-pin cable that then split into two 6+2-pins at the end. My current Corsair unit is 1-to-1, where the PSU end is 8-pin and the other is 6+2-pin. Both have their pros and cons: the pro of the split is that you only need to run one cable to the PSU to power a dual 8-pin card, but the con is that if you're using only one of them, you've got a chonky 8-pin dangling off to the side.


----------



## laszlo (Aug 28, 2020)

kayjay010101 said:


> I've seen both implementations. My prior EVGA PSU had a single 8-pin cable that then split into two 6+2-pins at the end. My current Corsair unit is 1-to-1, where the PSU end is 8-pin and the other is 6+2-pin. Both have their pros and cons: the pro of the split is that you only need to run one cable to the PSU to power a dual 8-pin card, but the con is that if you're using only one of them, you've got a chonky 8-pin dangling off to the side.


This could explain why he insisted on his variant, which I'd never seen; it seems to depend on the manufacturer then. Thanks for confirming!


----------



## steen (Aug 28, 2020)

steen said:


> Doubtless a nice piece of design, but necessitated by high power consumption. *8* layer PCB, back-drilled VIAs, high component density/quality -> high BOM. They're explicit in the video that the way to get higher performance is through higher power use. Will be telling for efficiency.


From page 1. Correction: 12-layer PCB, as per the design guide. How do you like the BOM now?


----------



## DuxCro (Aug 28, 2020)

You know what frightens me the most? That those ridiculous prices are the new standard for graphics cards. Won't go down, and can only go up.


----------



## Buftor (Aug 28, 2020)

A company this rich can't make a more polished video? This looks so amateurish, with meaningless graphics and horrible photography. There are a million home-produced channels on YouTube with higher production values.


----------



## BoboOOZ (Aug 28, 2020)

DuxCro said:


> You know what frightens me the most? That those ridiculous prices are the new standard for graphics cards. Won't go down, and can only go up.


Well, if sales are going great while prices are high, how can prices go down? All computer hardware sold phenomenally over the last six months. Prices might only come down when and if people buy less, but unfortunately that doesn't seem to be the trend.


----------



## Chrispy_ (Aug 28, 2020)

DuxCro said:


> You know what frightens me the most? That those ridiculous prices are the new standard for graphics cards. Won't go down, and can only go up.


If the consoles are priced at $500, the PC gaming market won't be able to sustain the ridiculous, ever-increasing Nvidia tax for much longer. Despite the vocal DIY PC market, your average gamer is just going to be swayed towards one of the new consoles instead. The games are heavily optimised for the console hardware and controllers, they get earlier game releases and additional exclusives, and the online community for consoles is usually larger when it comes to matchmaking and keeping servers alive.


----------



## medi01 (Aug 28, 2020)

Consoles going from 7850/7870-class GPUs to 2080/2080 Super levels, beating 95-98% of the PC market on GPU power and now targeting 4K resolutions, will only make it worse for the PC market.


----------



## Totally (Aug 29, 2020)

Chomiq said:


> So a single 12-pin will pull power from a single 8-pin pci-e cable?



2 x 8-pin, the same way as when 8-pin connectors were new and weren't on PSUs yet: people had to get by with 2 x 6-pin to 8-pin adapters.


----------

