
Radeon HD 6900 Series Officially Postponed to Mid-December

mdsx1950

New Member
Joined
Nov 21, 2009
Messages
2,064 (0.37/day)
Location
In a gaming world :D
System Name Knight-X
Processor Intel Core i7-980X [4.2Ghz]
Motherboard ASUS P6T7 WS
Cooling CORSAIR Hydro H70 + 2x CM 2000RPM 120mm LED Fans
Memory Corsair DOMINATOR-GT 12GB (6 x 2GB) DDR3 2000MHz
Video Card(s) AMD Radeon HD6970 2GB CrossFire [PhysX - EVGA GTX260(216SP) 896MB]
Storage 2x Corsair Perf. 512GB | OCZ Colossus 1TB | 2xOCZ Agilities 120GB SSDs
Display(s) Dell UltraSharp™ 3008WFP [2560x1600]
Case Cooler Master HAF-X + NZXT Temp LCD
Audio Device(s) ASUS Xonar D2X | ONKYO HTS9100THX 7.1
Power Supply Silverstone ST1500 1500W [80 PLUS Silver]
Software Windows 7 Ultimate X64 (Build 7600)
Benchmark Scores Heaven 2.0 with Extreme Tessellation at 1080p - 96FPS
DAMMIT!

I was waiting to get a 6900 card before December. WTF?
 
Joined
Feb 18, 2006
Messages
5,147 (0.75/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
DAMMIT!

I was waiting to get a 6900 card before December. WTF?

yeah but you were going for the 6990 and that was already postponed until january.
 
Joined
Apr 21, 2010
Messages
5,731 (1.07/day)
Location
West Midlands. UK.
System Name Ryzen Reynolds
Processor Ryzen 1600 - 4.0Ghz 1.415v - SMT disabled
Motherboard mATX Asrock AB350m AM4
Cooling Raijintek Leto Pro
Memory Vulcan T-Force 16GB DDR4 3000 16.18.18 @3200Mhz 14.17.17
Video Card(s) Sapphire Nitro+ 4GB RX 580 - 1450/2000 BIOS mod 8-)
Storage Seagate B'cuda 1TB/Sandisk 128GB SSD
Display(s) Acer ED242QR 75hz Freesync
Case Corsair Carbide Series SPEC-01
Audio Device(s) Onboard
Power Supply Corsair VS 550w
Mouse Zalman ZM-M401R
Keyboard Razor Lycosa
Software Windows 10 x64
Benchmark Scores https://www.3dmark.com/spy/6220813


btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,291 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
At this point it's prolly too late to make any hardware changes, just in software (drivers) and you can only do so much with those.

You definitely can change the GPU specs (with increased clocks, if the cooler permits). NVIDIA did just that after it found out that its original GTX 480 was too slow after the Radeon HD 5000 series released, but had to also redesign the cooler to keep up with the increased clock speeds (which added to the development time).
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,079 (1.83/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4Gb 1600Mhz CL9 Corsair XMS3 // 2x8Gb 3200 Mhz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w


wahdangun

Guest
You definitely can change the GPU specs (with increased clocks, if the cooler permits). NVIDIA did just that after it found out that its original GTX 480 was too slow after the Radeon HD 5000 series released, but had to also redesign the cooler to keep up with the increased clock speeds (which added to the development time).

so do you think that nvidia thought more MHz was more important than more SPs?
 

CDdude55

Crazy 4 TPU!!!
Joined
Jul 12, 2007
Messages
8,178 (1.28/day)
Location
Virginia
System Name CDdude's Rig!
Processor AMD Athlon II X4 620
Motherboard Gigabyte GA-990FXA-UD3
Cooling Corsair H70
Memory 8GB Corsair Vengence @1600mhz
Video Card(s) XFX HD 6970 2GB
Storage OCZ Agility 3 60GB SSD/WD Velociraptor 300GB
Display(s) ASUS VH232H 23" 1920x1080
Case Cooler Master CM690 (w/ side window)
Audio Device(s) Onboard (It sounds fine)
Power Supply Corsair 850TX
Software Windows 7 Home Premium 64bit SP1

btarunr

Editor & Senior Moderator
Staff member
so do you think that nvidia thought more MHz was more important than more SPs?

More MHz needs a lower power budget than more SPs. With GF110 (for which NV hasn't released an architecture schematic to date), NV definitely put GF100 through a weight-loss programme. It shed 200 million transistors, which gave NV the power budget to enable all 512 cores and also bump clock speeds.
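As a first-order sanity check on that power-budget arithmetic, dynamic switching power is commonly modelled as P ≈ C·V²·f. In that simplified model, at a fixed voltage, both extra units and extra clock scale power linearly, so which bump is "cheaper" depends only on its relative size. All figures below are invented for illustration, not real GF100/GF110 numbers:

```python
# Toy model of dynamic switching power: P = C * V^2 * f (leakage ignored).
# Unit counts, clocks, and the per-unit capacitance are illustrative only.

def dynamic_power(units, volts, mhz, c_per_unit=1e-4):
    """Switching power of `units` shader blocks at `volts` and `mhz`."""
    return units * c_per_unit * volts**2 * mhz

base = dynamic_power(480, 1.0, 700)        # GF100-like: 480 SPs @ 700 MHz
more_units = dynamic_power(512, 1.0, 700)  # enable 32 more SPs, same clock
more_clock = dynamic_power(480, 1.0, 772)  # same SPs, ~10% higher clock

print(f"+32 SPs: {more_units / base - 1:+.1%}")   # ~ +6.7%
print(f"+72 MHz: {more_clock / base - 1:+.1%}")   # ~ +10.3%
```

In this first-order sketch a 10% clock bump costs more than a 7% unit bump, but at fixed voltage the two knobs are symmetric; the real asymmetry comes from the voltage term, discussed further down the thread.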
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
More MHz needs a lower power budget than more SPs.

Not true at all. All the evidence points to the opposite. An overclocked GTX 470 consumes almost as much as a GTX 480. GTX 470 and GTX 465 power draw is similar compared to the huge difference between the GTX 470 and GTX 480, despite the fact that 480 vs 470 means only a ~10% reduction in enabled parts, vs a ~30% reduction for the GTX 465 vs the GTX 470.

There are many, many other examples among past and current cards, but the most obvious one is the HD 5850 vs the HD 5830.

With GF110, (of which NV didn't release an architecture schematic till date), NV definitely put GF100 through a weight loss programme. It shed 200 million transistors, which gave NV the power budget to enable all 512 cores and also bump clock speeds.

I don't think it's the transistor reduction which made that possible, but rather the fact that they didn't screw up what Nvidia calls the fabric this time around. The 200 million transistor reduction is impressive nonetheless, considering that FP64 math and all the GPGPU goodness is still there, contrary to what was first rumored.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
I don't think it's the transistor reduction which made that possible, but rather the fact that they didn't screw up what Nvidia calls the fabric this time around. The 200 million transistor reduction is impressive nonetheless, considering that FP64 math and all the GPGPU goodness is still there, contrary to what was first rumored.

The "fabric" is cache, and is one of the hottest parts of any GPU/CPU. They lowered its size/complexity, which allowed them to enable the extra shaders without going over PCI-E power budgets.

That Unified Cache is one of Fermi's biggest selling points for GPGPU, and hence the "rumours" about the revision affecting GPGPU performance.
 
Joined
Apr 26, 2009
Messages
517 (0.09/day)
Location
You are here.
System Name Prometheus
Processor Intel i7 14700K
Motherboard ASUS ROG STRIX B760-I
Cooling Noctua NH-D12L
Memory Corsair 32GB DDR5-7200
Video Card(s) MSI RTX 4070Ti Ventus 3X OC 12GB
Storage WD Black SN850 1TB
Display(s) DELL U4320Q 4K
Case SSUPD Meshroom D Fossil Gray
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Corsair SF750 Platinum SFX
Mouse Razer Orochi V2
Keyboard Nuphy Air75 V2 White
Software Windows 11 Pro x64
Not true at all. All the evidence points to the opposite. An overclocked GTX 470 consumes almost as much as a GTX 480. GTX 470 and GTX 465 power draw is similar compared to the huge difference between the GTX 470 and GTX 480, despite the fact that 480 vs 470 means only a ~10% reduction in enabled parts, vs a ~30% reduction for the GTX 465 vs the GTX 470.

You assume that those cards use the same voltage domains. Usually, in the "lesser" implementations of the chip, the GTX 465/GTX 470, we get chips that could not achieve the clock domains of the GTX 480 at the required core voltage. The chips also had different levels of transistor current "leakage".

In time the 40 nm TSMC node stabilized, and we can see GTX 470 cards that are overclocked yet consume less power than stock GTX 470s of the past; the Gigabyte GTX 470 SOC, for example.

To continue my "theory" that AMD has yield problems: having good yields does not just mean having "working" chips. The chips have to achieve a certain frequency at a certain core voltage in order to meet an established maximum board power. If they do not, then the cooling solution must be adjusted, the vBIOS has to be tweaked, better components have to be used to provide cleaner power, and so on...

I do not buy that AMD was "surprised" by the GTX 580's performance. AMD knows about nVidia cards long before even the rumors start, months before. nVidia knows what AMD is preparing months before release. Be sure of that.
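The binning logic described above can be sketched as a simple pass/fail test: a die only makes a given SKU's bin if it hits the target clock at a voltage that keeps the board inside its power limit. The voltage model and every threshold below are invented for illustration:

```python
# Sketch of chip binning: a die qualifies for a bin only if its minimum
# stable voltage at the target clock stays under the board's voltage cap.
# The voltage model and all constants are hypothetical, not real GF100 data.

def min_stable_voltage(die_quality, target_mhz):
    """Assumed model: leakier/worse dies (lower quality) need more V per MHz."""
    return 0.8 + (target_mhz / 1000) * (0.3 + 0.2 * (1 - die_quality))

def makes_bin(die_quality, target_mhz, v_limit=1.1):
    return min_stable_voltage(die_quality, target_mhz) <= v_limit

# A good die bins as a full-spec part; a leaky one only qualifies for a
# cut-down, lower-clocked SKU.
print(makes_bin(0.9, 772))   # True  (full-spec clocks)
print(makes_bin(0.3, 772))   # False (leaky die fails at full clocks)
print(makes_bin(0.3, 607))   # True  (same die at lower clocks)
```

This is why "good yields" alone doesn't tell the whole story: functional dies can still miss the frequency/voltage target of the flagship bin.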
 

Benetanegia

New Member
The "fabric" is cache, and is one of the hottest parts of any GPU/CPU. They lowered its size/complexity, which allowed them to enable the extra shaders without going over PCI-E power budgets.

That Unified Cache is one of Fermi's biggest selling points for GPGPU, and hence the "rumours" about the revision affecting GPGPU performance.

I'm talking about what Nvidia/ Jen-Hsun called fabric in the video where they spoke about the problems. It's not cache. It's the interconnection layer, which does connect cache with the SPs and SPs with TMUs and whatnot, but that's about the only relation there is with cache.

On top of that, I don't know where you heard that the cache is smaller/simpler. Do you have a link I could read? What I have read is quite the opposite, from TechReport:

The third change is pretty subtle. In the Fermi architecture, the shader multiprocessors (SMs) have 64KB of local data storage that can be partitioned either as 16KB of L1 cache and 48KB of shared memory or vice-versa. When the GF100 is in a graphics context, the SM storage is partitioned in a 16KB L1 cache/48KB shared memory configuration. The 48KB/16KB config is only available for GPU computing contexts. The GF110 is capable of running with a 48KB L1 cache/16KB shared memory split for graphics, which Nvidia says "helps certain types of shaders."

More cache capabilities indeed.

Also still has the same 768 KB of L2 cache. And ECC.

You assume that those cards use the same voltage domains. Usually, in the "lesser" versions of the chip's implementations, the GTX465/GTX470, we have chips that could not achieve the clock domains of the GTX480 at the required core voltage. Also the chips had different levels of transistor current "leakage".

In time the 40nm TSMC node was stabilized, and we can see GTX470 cards that are overclocked and consume less power then stock GTX470's of the past. For example the Gigabyte GTX470 SOC.

Still, that doesn't change the fact that power consumption is more affected by clock speeds than by enabled parts. It has always been like that and will never change. There's 20+ years of evidence I could bring in here, but I don't think I really need to. Higher clock speeds also produce higher leakage, and in general terms there's a very good reason why we keep getting bigger and bigger CPUs/GPUs that are never clocked much higher than 3 GHz (CPU) or 1 GHz (GPU). It's not exactly because they cannot go higher, since they do go much higher with exotic cooling.
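The asymmetry being argued here comes from the voltage term in P ≈ C·V²·f: past a certain point, raising frequency also requires raising voltage, so power grows roughly with f³, while enabling more units at a fixed voltage/frequency point only grows power linearly. A sketch, with an entirely made-up linear V/f curve:

```python
# Why clock bumps cost more than unit bumps: frequency increases usually
# demand voltage increases, so P = C*V^2*f grows roughly cubically in f,
# while adding units at fixed V/f grows P linearly.
# The V/f curve and all constants below are invented for illustration.

def power(units, volts, mhz):
    return units * volts**2 * mhz          # arbitrary power units

def volts_for(mhz, base_mhz=700, base_v=1.0, slope=0.002):
    """Assumed V/f curve: each MHz past base needs a little more voltage."""
    return base_v + slope * max(0, mhz - base_mhz)

base = power(480, volts_for(700), 700)

# +20% units at stock clock and voltage:
units = power(576, volts_for(700), 700)

# +20% clock on the same 480 units, with the voltage the curve demands:
clock = power(480, volts_for(840), 840)

print(f"+20% units: {units / base - 1:+.0%}")   # +20%
print(f"+20% clock: {clock / base - 1:+.0%}")   # ~ +97% (nearly doubles)
```

Under this (assumed) model, the same 20% performance knob costs roughly 20% more power via units but nearly doubles power via clocks, which is the shape of the argument being made about the GTX 465/470/480 comparison.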
 
Last edited:
Joined
Apr 4, 2008
Messages
4,686 (0.77/day)
System Name Obelisc
Processor i7 3770k @ 4.8 GHz
Motherboard Asus P8Z77-V
Cooling H110
Memory 16GB(4x4) @ 2400 MHz 9-11-11-31
Video Card(s) GTX 780 Ti
Storage 850 EVO 1TB, 2x 5TB Toshiba
Case T81
Audio Device(s) X-Fi Titanium HD
Power Supply EVGA 850 T2 80+ TITANIUM
Software Win10 64bit
Joined
Jan 2, 2009
Messages
9,899 (1.70/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some wierd Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
Still, that doesn't change the fact that power consumption is more affected by clock speeds than by enabled parts. It has always been like that and will never change. There's 20+ years of evidence I could bring in here, but I don't think I really need to. Higher clock speeds also produce higher leakage, and in general terms there's a very good reason why we keep getting bigger and bigger CPUs/GPUs that are never clocked much higher than 3 GHz (CPU) or 1 GHz (GPU). It's not exactly because they cannot go higher, since they do go much higher with exotic cooling.

Just to elaborate on this: around 4 GHz is where processors start running into problems due to physical limitations of the materials currently used for transistors. This is why voltage requirements typically jump up dramatically at that point.
 
Joined
May 5, 2009
Messages
2,270 (0.40/day)
Location
the uk that's all you need to know ;)
System Name not very good (wants throwing out window most of time)
Processor xp3000@ 2.17ghz pile of sh** /i7 920 DO on air for now
Motherboard msi kt6 delta oap /gigabyte x58 ud7 (rev1.0)
Cooling 1 green akasa 8cm(rear) 1 multicoloured akasa(hd) 1 12 cm (intake) 1 9cm with circuit from old psu
Memory 1.25 gb kingston hyperx @333mhz/ 3gb corsair dominator xmp 1600mhz
Video Card(s) (agp) hd3850 not bad not really suitable for mobo n processor/ gb hd5870
Storage wd 320gb + samsung 320 gig + wd 1tb 6gb/s
Display(s) compaq mv720
Case thermaltake XaserIII skull / coolermaster cm 690II
Audio Device(s) onboard
Power Supply corsair hx 650 w which solved many problems (blew up) /850w corsair
Software xp pro sp3/ ? win 7 ultimate (32 bit)
Benchmark Scores 6543 3d mark05 ye ye not good but look at the processor /uknown as still not benched
meh i couldn't care less tbh, my 5870 does what i need it to,
now all i need is more horse power, might invest in a i7 980x for my emulatory purposes ;)
but might get something new graphics card wise next birthday,

hey this i7 920 and 5870 was my present to myself :D


still it's sad news for those who need their fix of tech, just tell the mrs it's a christmas present for yourself;)
 

cadaveca

My name is Dave
I'm talking about what Nvidia/ Jen-Hsun called fabric in the video where they spoke about the problems. It's not cache. It's the interconnection layer, which does connect cache with the SPs and SPs with TMUs and whatnot, but that's about the only relation there is with cache.

On top of that, I don't know where you heard that the cache is smaller/simpler. Do you have a link I could read? What I have read is quite the opposite, from TechReport:



More cache capabilities indeed.

Also still has the same 768 KB of L2 cache. And ECC.

That's not more capabilities... they always existed. Check the Fermi whitepaper.

As to the cache thing, I dunno. Maybe just rumour. Maybe the size change was what you mentioned, and the rest is down to cache complexity. Those 200 mil transistors went somewhere...
 

Benetanegia

New Member
That's not more capabilities... they always existed. Check the Fermi whitepaper.

I know they were there, but only for CUDA; it's explained in the quote. Only a guess, but I suppose that enabling them for DX applications requires a small change to the ISA. Like I said, just guessing there.

As to the cache thing, I dunno. Maybe just rumour. Maybe the size change was what you mentioned, and the rest is down to cache complexity. Those 200 mil transistors went somewhere...

Probably just unneeded or redundant transistors. AFAIK many transistors are only used for stability, for voltage/current tweaking throughout the chip, or to control the functionality of other transistors, since CMOS doesn't use resistors and every transistor is biased by other transistors and the parasitic resistance of the circuit it "sees".
 

cadaveca

My name is Dave
Sure, they could have also had some redundant parts in GF100 that they found they didn't need, and trimmed. Given the info out now, the cache thing made sense to me, so I've accepted it as fact.

CUDA isn't magically different from 3D (both are mathematical calculations using the exact same hardware). To me, that's nV trying to ensure their business customers buy the GPGPU products, at their high prices, instead of normal GeForce cards. They simply didn't expose the functionality in the driver. Really, you're smart enough not to buy THAT hype... we both know that Tesla and GeForce products feature the same silicon.


ANYWAY...


I'm happy for the delay. I've said all along: 6970 in January and 6990 in March, so this news doesn't affect me one bit. It just exposes AMD's lack of honesty sometimes (at the Barts launch they said the 6970 was coming at the end of the next week).
 

Benetanegia

New Member
Sure, they could have also had some redundant parts in GF100 that they found they didn't need, and trimmed. Given the info out now, the cache thing made sense to me, so I've accepted it as fact.

CUDA isn't magically different from 3D (both are mathematical calculations using the exact same hardware). To me, that's nV trying to ensure their business customers buy the GPGPU products, at their high prices, instead of normal GeForce cards. They simply didn't expose the functionality in the driver. Really, you're smart enough not to buy THAT hype... we both know that Tesla and GeForce products feature the same silicon.

I was not implying that the silicon differs between GeForce and Tesla, but GF100 and GF110 are different. I was saying that I could only guess that being able to request that cache setting in a DX environment required a small addition to the ISA. But maybe you're right and it's only at the driver level, although do those two claims even exclude each other? I mean, how much of the alleged ISA on modern GPUs has a direct hardware implementation and how much is microcode coming from the drivers? All I know is that I don't know jack about that.

As for the cache, yes, they might have reduced the transistor count in the cache, but they didn't cut off any functionality to do so. In fact GF110 has greater functionality in almost every aspect, and that's why I said I think it is impressive. All the rumors talked about a reduction achieved by cutting off the high FP64 capabilities, ECC, and some other GPGPU-related features, but it's all there, and at the same time they enhanced the FP16 filtering, so I just think it's quite a feat. Maybe it's not something the average user or TPU enthusiast will say "wow!" about, but even the fact that you think something must have been cut off kinda demonstrates the importance of the achievement.

At the same time, that's something that makes me feel a little disappointed about GF110. I thought it was going to be a gaming-oriented chip, but it's GF100 done right. That is good on its own and will help Nvidia a lot in getting into HPC. It's faster, yes; it consumes a lot less, yes; and it's smaller. But can you imagine what Nvidia could have released if ECC and FP64 had been ditched and they had used the 48-SP config that is used in GF104?

I have talked long enough about what I thought GF110 would be: mainly GF104 + 50%, the 576-SP monster that was at the same time smaller than GF100. Back then I considered it a possibility; right now I think it's a certainty, and I'm becoming more inclined to support the group of people who think that Nvidia should find a way to separate their GPU and GPGPU efforts somehow and make two different chips, despite the similarities. And bear in mind that I fully understand why that is not possible: making a chip for a market that will hardly ever see 1 million sales is pointless. That's why I said "inclined to" and am not sure about it. Well, maybe in the future, if the HPC market grows enough to make it worthwhile...
 
Last edited:

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.65/day)
System Name The ClusterF**k
Processor 980X @ 4Ghz
Motherboard Gigabyte GA-EX58-UD5 BIOS F12
Cooling MCR-320, DDC-1 pump w/Bitspower res top (1/2" fittings), Koolance CPU-360
Memory 3x2GB Mushkin Redlines 1600Mhz 6-8-6-24 1T
Video Card(s) Evga GTX 580
Storage Corsair Neutron GTX 240GB, 2xSeagate 320GB RAID0; 2xSeagate 3TB; 2xSamsung 2TB; Samsung 1.5TB
Display(s) HP LP2475w 24" 1920x1200 IPS
Case Technofront Bench Station
Audio Device(s) Auzentech X-Fi Forte into Onkyo SR606 and Polk TSi200's + RM6750
Power Supply ENERMAX Galaxy EVO EGX1250EWT 1250W
Software Win7 Ultimate N x64, OSX 10.8.4
I'm talking about what Nvidia/ Jen-Hsun called fabric in the video where they spoke about the problems. It's not cache. It's the interconnection layer, which does connect cache with the SPs and SPs with TMUs and whatnot, but that's about the only relation there is with cache.

On top of that, I don't know where you heard that the cache is smaller/simpler. Do you have a link I could read? What I have read is quite the opposite, from TechReport:



More cache capabilities indeed.

Also still has the same 768 KB of L2 cache. And ECC.



Still, that doesn't change the fact that power consumption is more affected by clock speeds than by enabled parts. It has always been like that and will never change. There's 20+ years of evidence I could bring in here, but I don't think I really need to. Higher clock speeds also produce higher leakage, and in general terms there's a very good reason why we keep getting bigger and bigger CPUs/GPUs that are never clocked much higher than 3 GHz (CPU) or 1 GHz (GPU). It's not exactly because they cannot go higher, since they do go much higher with exotic cooling.

Actually, I'd like to see that evidence. I'm now curious as to which method actually does pull more power.

In fact, I'm willing to bet that you'll find examples that support both arguments.
 
Joined
Jan 11, 2009
Messages
9,250 (1.59/day)
Location
Montreal, Canada
System Name Homelabs
Processor Ryzen 5900x | Ryzen 1920X
Motherboard Asus ProArt x570 Creator | AsRock X399 fatal1ty gaming
Cooling Silent Loop 2 280mm | Dark Rock Pro TR4
Memory 128GB (4x32gb) DDR4 3600Mhz | 128GB (8x16GB) DDR4 2933Mhz
Video Card(s) EVGA RTX 3080 | ASUS Strix GTX 970
Storage Optane 900p + NVMe | Optane 900p + 8TB SATA SSDs + 48TB HDDs
Display(s) Alienware AW3423dw QD-OLED | HP Omen 32 1440p
Case be quiet! Dark Base Pro 900 rev 2 | be quiet! Silent Base 800
Power Supply Corsair RM750x + sleeved cables| EVGA P2 750W
Mouse Razer Viper Ultimate (still has buttons on the right side, crucial as I'm a southpaw)
Keyboard Razer Huntsman Elite, Pro Type | Logitech G915 TKL
It depends on the scenario and the card's architecture, obviously; there is no answer cut in stone, it's experiment and see... they probably optimized it.

Clock speed affects power usage in one way and enabling units in another; they just have to find the sweet spot: a little bit of math, and a lot of trial and error.
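That sweet-spot hunt can be sketched as a small search over clock/unit combinations under a board-power cap, maximizing throughput. The power model, V/f curve, and every constant below are invented for illustration:

```python
# Toy "sweet spot" search: pick the (units, MHz) combo with the highest
# idealized throughput that stays inside a fixed board-power budget.
# Power model, V/f curve, and all constants are hypothetical.

def power_w(units, mhz):
    volts = 1.0 + 0.002 * max(0, mhz - 700)   # assumed V/f curve
    return units * 5e-4 * volts**2 * mhz

def throughput(units, mhz):
    return units * mhz                        # idealized: perfect scaling

BUDGET_W = 250
best = max(
    ((u, f) for u in (448, 480, 512) for f in range(700, 901, 25)
     if power_w(u, f) <= BUDGET_W),
    key=lambda cfg: throughput(*cfg),
)
print(f"best config under {BUDGET_W} W: {best[0]} SPs @ {best[1]} MHz")
```

Because clocks carry the voltage penalty while units scale linearly, this kind of search tends to favor enabling everything and clocking moderately, which matches the GF110 "all 512 cores plus a modest clock bump" outcome described earlier in the thread.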
 
There isn't a "4 GHz" limit in the material used to build these chips. The limitation comes from the operating temperature: once it's over an estimated threshold, the chip will not function correctly at a given frequency, and in extreme cases it will be irreparably damaged.

Pure silicon can take more than 1400 °C, but in order to build actual chips you add other materials to the mix (doping), and those drag that threshold down to just 7-9% of it.

And you need these things to operate for years, so you can't set that threshold too high. All chips will eventually die, it's just a matter of time.

For example, there is a Pentium 4 that achieved 32.6 GHz (!!!) and an Athlon 64 that achieved 10.3 GHz. And yet today we only have a 6.4 GHz six-core i7 and a 7.1 GHz Phenom II X4... So the complexity of the chip, the uArch, the complexity of the workloads, the sheer size of the chip: basically EVERYTHING has to be taken into account.

So, to continue: yes, frequency matters when you want to keep TDP down, but no more or less than the maturity of the process node, the complexity of the uArch, the size of the die... You can't just take one factor and say it's the singular thing that drives TDP up. And when you build a chip, you need it to do something... you don't build a chip that runs at a bazillion GHz and does nothing else, even though you could.

20 years of history doesn't really apply. We can build stuff today that we only dreamed about years ago. And we will be able to build stuff tomorrow that we didn't even dream about today. We call it "evolution" because we like to honor the past, but the only thing that drives evolution is a series of "breakthroughs". That means doing things differently, better.
 
Joined
Oct 19, 2008
Messages
3,166 (0.54/day)
System Name White Theme
Processor Intel 12700K CPU
Motherboard ASUS STRIX Z690-A D4
Cooling Lian Li Galahad Uni w/ AL120 Fans
Memory 32GB DDR4-3200 Corsair Vengeance
Video Card(s) Gigabyte Aero 4080 Super 16GB
Storage 2TB Samsung 980 Pro PCIE 4.0
Display(s) Alienware 38" 3840x1600 (165Hz)
Case Lian Li O11 Dynamic EVO White
Audio Device(s) 2i2 Scarlett Solo + Schiit Magni 3 AMP
Power Supply Corsair HX 1000 Platinum
lol, AMD could have overestimated the GTX 580! They are probably just trying to build up stock because they are now projected to sell more now that the 580 has been revealed. There's a theory!

That, or... the usual tweaking of the card to be faster than it is... then there's the official announcement about component shortages... I hope the delay makes this series more worthwhile, AMD :laugh:
 