
AMD Zambezi "Bulldozer" Desktop CPU Roadmap Revealed

bear jesus

New Member
Joined
Aug 12, 2010
Messages
1,534 (0.30/day)
Location
Britland
System Name Gaming temp// HTPC
Processor AMD A6 5400k // A4 5300
Motherboard ASRock FM2A75 PRO4// ASRock FM2A55M-DGS
Cooling Xigmatek HDT-D1284 // stock phenom II HSF
Memory 4GB 1600mhz corsair vengeance // 4GB 1600mhz corsair vengeance low profile
Storage 64gb sandisk pulse SSD and 500gb HDD // 500gb HDD
Display(s) acer 22" 1680x1050
Power Supply Seasonic G-450 // Corsair CXM 430W
Of course I would be using AMD's plugin that uses a Radeon to render your videos, but they had to go and be AMD instead of ATI and require an AMD CPU. I'd rather not get a slower CPU just for the sake of a GPU plugin which more than likely won't help me much.

Wow, that's like the biggest fail I have heard of in weeks; I assumed it worked with an Intel CPU.
 

HillBeast

New Member
Joined
Jan 16, 2010
Messages
407 (0.08/day)
Location
New Zealand
System Name Kuja
Processor Intel Core i7 930
Motherboard Gigabyte X58A-UD3R
Cooling Corsair H50 HB.o Special Edition with Koolance CHC-122 NB Block
Memory OCZ Extreme Edition 4GB Dual Channel
Video Card(s) Sapphire Radeon 5870 Vapor-X Rev. 2
Storage 2x 1TB WD Green in RAID
Display(s) BenQ V2400W
Case Lian Li PC-A17 HB.o Special Edition
Audio Device(s) Onboard Realtek 889A
Power Supply Gigabyte Odin Pro 800W
Software Windows 7 Professional
Benchmark Scores 93632 sysPoints in sysTest '09; 47 FPS in Star Tales Benchmark
PS: HillBeast, NV are quite good for GPU encoding. Have you considered a cheap GT240 just for encoding? It's quick too, and you could get some hybrid PhysX bonus points as well. Then just disable ATI encoding and you're off.

I could do that, but it would mean taking my GTX285 out :p

Already tried NVIDIA encoding, and it's the same deal: a little faster, but still no Adobe support, and no point in it.

CPU encoding is the best and will always be the best, as there are no driver issues. We need a CPU which is a TRUE fusion of a CPU and a GPU, like Cell, but not so fail, and with better single threading.

Wow, that's like the biggest fail I have heard of in weeks; I assumed it worked with an Intel CPU.

Nah, the AMD plugin for Premiere only works with an AMD CPU + ATI Radeon <- it will never be an AMD Radeon. Even AMD themselves think so when you look at the download drivers section: http://www.amd.com/au/Pages/AMDHomePage.aspx
 
Joined
Sep 3, 2010
Messages
3,535 (0.68/day)
Location
Netherlands
System Name ap201 | Odroid N2+ | NUC
Processor AMD Ryzen 5 3600 | Amlogic S922X | Intel Core i5-7260
Motherboard Gigabyte B550M DS3H |Odroid N2+ | NUC Board 7
Cooling Inter-Tech Argus SU-200, 3x Arctic P12 case fans | stock heatsink + fan | stock HSF
Memory Gskill Aegis DDR4 32GB | 4 GB DDR4 | 16 GB DDR4
Video Card(s) Sapphire Pulse RX 6600 (8GB) | Arm Mali G52 | Iris Plus 640
Storage SK Hynix 240GB, Samsung 840 + 850 EVO (2x 250 GB) | Samsung 850 Evo 500GB | WD Green 240 GB
Display(s) AOC G2260VWQ6 | LG 24MT57D |
Case Asus Prime 201 | Stock case (black version) | Stock case
Audio Device(s) integrated
Power Supply BeQuiet! Pure Power 11 400W | 12v barrel jack | 19V laptop brick (Asus)
Mouse Logitech G500 |Steelseries Rival 300 | no-name ergo mouse
Keyboard Qpad MK-50 (Cherry MX brown)| Blaze Keyboard
Software Windows 10, EndeavourOS | Gentoo Linux | EndeavourOS
Escalade, both top end "parts"

Are you sure? As a car to use in town it's a far from smart choice (with all that traffic a Smart, Citroën DS3, Volkswagen Golf, Ford Focus etc. are more practical IMHO), and if you need an off-road vehicle you'd better get a Range Rover or a Jeep.
And IIRC from a review, the Escalade was all plastic inside (as most US cars are).
 
Joined
Nov 4, 2005
Messages
11,960 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I could do that, but it would mean taking my GTX285 out :p

Already tried NVIDIA encoding, and it's the same deal: a little faster, but still no Adobe support, and no point in it.

CPU encoding is the best and will always be the best, as there are no driver issues. We need a CPU which is a TRUE fusion of a CPU and a GPU, like Cell, but not so fail, and with better single threading.



Nah, the AMD plugin for Premiere only works with an AMD CPU + ATI Radeon <- it will never be an AMD Radeon. Even AMD themselves think so when you look at the download drivers section: http://www.amd.com/au/Pages/AMDHomePage.aspx

Nvidia still works with CS4 and up, the full version. However, some of the forum members there have been able to use an ATI card and get the same effects by starting the computer and Adobe with the Nvidia card attached and primary, then switching back to the ATI card once it is open and running. So Adobe & Nvidia are just as guilty as ATI is at this shit.

I have AMD/ATI and the Stream encoder/decoder doesn't work with the program it is supposed to work with. So AMD can kiss my ass with their digital dream. Once they make good on the promises I have paid for with my last five cards from them, I might consider them again, but really I am moving to Nvidia once they get their issues fixed.
 

HillBeast

New Member
Joined
Jan 16, 2010
Messages
407 (0.08/day)
Location
New Zealand
System Name Kuja
Processor Intel Core i7 930
Motherboard Gigabyte X58A-UD3R
Cooling Corsair H50 HB.o Special Edition with Koolance CHC-122 NB Block
Memory OCZ Extreme Edition 4GB Dual Channel
Video Card(s) Sapphire Radeon 5870 Vapor-X Rev. 2
Storage 2x 1TB WD Green in RAID
Display(s) BenQ V2400W
Case Lian Li PC-A17 HB.o Special Edition
Audio Device(s) Onboard Realtek 889A
Power Supply Gigabyte Odin Pro 800W
Software Windows 7 Professional
Benchmark Scores 93632 sysPoints in sysTest '09; 47 FPS in Star Tales Benchmark
Nvidia still works with CS4 and up, the full version. However, some of the forum members there have been able to use an ATI card and get the same effects by starting the computer and Adobe with the Nvidia card attached and primary, then switching back to the ATI card once it is open and running. So Adobe & Nvidia are just as guilty as ATI is at this shit.

I have AMD/ATI and the Stream encoder/decoder doesn't work with the program it is supposed to work with. So AMD can kiss my ass with their digital dream. Once they make good on the promises I have paid for with my last five cards from them, I might consider them again, but really I am moving to Nvidia once they get their issues fixed.


It's really not a matter of getting acceleration inside Premiere; I want the acceleration in Media Encoder when I'm exporting my work. It doesn't matter what I have there, because Adobe can't be bothered putting it in OpenCL or CUDA or DirectCompute or whatever you want to favour. Odd that they CUDA'd up Premiere CS5 and RUINED that, but left Media Encoder alone. I guess programming encoders to work with OpenCL (or whatever library you favour) just isn't easy, and what you get is really low-quality footage, not something I want.
 
Joined
Jan 2, 2009
Messages
9,899 (1.71/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some weird Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
Nvidia still works with CS4 and up, the full version. However, some of the forum members there have been able to use an ATI card and get the same effects by starting the computer and Adobe with the Nvidia card attached and primary, then switching back to the ATI card once it is open and running. So Adobe & Nvidia are just as guilty as ATI is at this shit.

I have AMD/ATI and the Stream encoder/decoder doesn't work with the program it is supposed to work with. So AMD can kiss my ass with their digital dream. Once they make good on the promises I have paid for with my last five cards from them, I might consider them again, but really I am moving to Nvidia once they get their issues fixed.


Did you actually install Stream?

It's not installed by default, so a lot of things don't work if you just try running them.

You have to download the SDK to get it working.

Shit flies when decoding on my GPUs!
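
If you want to sanity-check that the runtime actually made it onto the box, a minimal sketch like this (my own illustration, assuming the OpenCL headers that ship with the Stream SDK and linking with -lOpenCL) just enumerates platforms and counts GPU devices:

```c
/* Sketch: check whether an OpenCL runtime (e.g. the one in the ATI Stream
 * SDK) is actually installed. Assumes the OpenCL headers and ICD loader
 * are present; build with: gcc check.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint nplat = 0;
    if (clGetPlatformIDs(0, NULL, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("No OpenCL platforms - the SDK/runtime is probably not installed.\n");
        return 1;
    }

    cl_platform_id plats[8];
    if (nplat > 8)
        nplat = 8;
    clGetPlatformIDs(nplat, plats, NULL);

    for (cl_uint i = 0; i < nplat; i++) {
        char name[256] = "unknown";
        cl_uint gpus = 0;
        clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 0, NULL, &gpus) != CL_SUCCESS)
            gpus = 0;
        printf("Platform '%s': %u GPU device(s)\n", name, gpus);
    }
    return 0;
}
```

If it prints no platforms, the runtime never got installed, which would explain apps silently falling back to their CPU paths.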
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.68/day)
System Name The ClusterF**k
Processor 980X @ 4Ghz
Motherboard Gigabyte GA-EX58-UD5 BIOS F12
Cooling MCR-320, DDC-1 pump w/Bitspower res top (1/2" fittings), Koolance CPU-360
Memory 3x2GB Mushkin Redlines 1600Mhz 6-8-6-24 1T
Video Card(s) Evga GTX 580
Storage Corsair Neutron GTX 240GB, 2xSeagate 320GB RAID0; 2xSeagate 3TB; 2xSamsung 2TB; Samsung 1.5TB
Display(s) HP LP2475w 24" 1920x1200 IPS
Case Technofront Bench Station
Audio Device(s) Auzentech X-Fi Forte into Onkyo SR606 and Polk TSi200's + RM6750
Power Supply ENERMAX Galaxy EVO EGX1250EWT 1250W
Software Win7 Ultimate N x64, OSX 10.8.4
Are you sure? As a car to use in town it's a far from smart choice (with all that traffic a Smart, Citroën DS 3, Volkswagen Golf, Ford Focus etc. more practical IMHO) and if you need an off-road vehicle you better get a Range Rover, or a Jeep.
And IIRC from a review the Escalade was all plastic from inside (as most US cars are).

1.) I'm in the US. They work fine here. Our roads and parking spaces are plenty large enough for SUVs.

2.) Like it or not, they are one of Cadillac's top of the line premium models.

3.) I didn't compare it to other makers. I used Cadillac as a parallel to AMD. I didn't say it was the best car in the world; I used the Escalade to prove the point that people know what brand it is because of the premium models. I never once mentioned the value of the car, or compared it to other makers.

You have completely missed the point of the exercise.
 
Joined
Sep 3, 2010
Messages
3,535 (0.68/day)
Location
Netherlands
System Name ap201 | Odroid N2+ | NUC
Processor AMD Ryzen 5 3600 | Amlogic S922X | Intel Core i5-7260
Motherboard Gigabyte B550M DS3H |Odroid N2+ | NUC Board 7
Cooling Inter-Tech Argus SU-200, 3x Arctic P12 case fans | stock heatsink + fan | stock HSF
Memory Gskill Aegis DDR4 32GB | 4 GB DDR4 | 16 GB DDR4
Video Card(s) Sapphire Pulse RX 6600 (8GB) | Arm Mali G52 | Iris Plus 640
Storage SK Hynix 240GB, Samsung 840 + 850 EVO (2x 250 GB) | Samsung 850 Evo 500GB | WD Green 240 GB
Display(s) AOC G2260VWQ6 | LG 24MT57D |
Case Asus Prime 201 | Stock case (black version) | Stock case
Audio Device(s) integrated
Power Supply BeQuiet! Pure Power 11 400W | 12v barrel jack | 19V laptop brick (Asus)
Mouse Logitech G500 |Steelseries Rival 300 | no-name ergo mouse
Keyboard Qpad MK-50 (Cherry MX brown)| Blaze Keyboard
Software Windows 10, EndeavourOS | Gentoo Linux | EndeavourOS
1.) I'm in the US. They work fine here. Our roads and parking spaces are plenty large enough for SUVs.

Trying to get through a downtown district of a European city is often a different matter, though. Like sh** through a funnel, so most folks are trying to keep the size of their turds to a limit. ;)
Certain cities even charge fees if people wish to go through the downtown district by car (to discourage their use and encourage people to go by bike, scooter, mass transit or whatever), because of the traffic congestion. So SUV drivers are quite frowned upon when using such vehicles downtown, because they hinder the other traffic so much.

And in Asia the traffic is often even worse than in Europe.

2.) Like it or not, they are one of Cadillac's top of the line premium models.

I should have realised that. My bad.

3.) I didn't compare it to other makers. I used Cadillac as a parallel to AMD. I didn't say it was the best car in the world; I used the Escalade to prove the point that people know what brand it is because of the premium models. I never once mentioned the value of the car, or compared it to other makers.

If you want to make such a point, better pick a more globally known car brand as a parallel (Asian or Euro brands, like Hyundai, Toyota, Mercedes, BMW, etc.). Cars like the Escalade are barely known here; I actually only know it because of TV. The only US brands selling well in Europe are Ford and General Motors (under the names of Chevrolet (mostly former Daewoo models) and Opel/Vauxhall).

You have completely missed the point of the exercise.

Not completely my fault, IMHO, if your parallels don't work that well for non-Americans. I mean, I can try my best of course, but there is some chance of "failure". ;)
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.68/day)
System Name The ClusterF**k
Processor 980X @ 4Ghz
Motherboard Gigabyte GA-EX58-UD5 BIOS F12
Cooling MCR-320, DDC-1 pump w/Bitspower res top (1/2" fittings), Koolance CPU-360
Memory 3x2GB Mushkin Redlines 1600Mhz 6-8-6-24 1T
Video Card(s) Evga GTX 580
Storage Corsair Neutron GTX 240GB, 2xSeagate 320GB RAID0; 2xSeagate 3TB; 2xSamsung 2TB; Samsung 1.5TB
Display(s) HP LP2475w 24" 1920x1200 IPS
Case Technofront Bench Station
Audio Device(s) Auzentech X-Fi Forte into Onkyo SR606 and Polk TSi200's + RM6750
Power Supply ENERMAX Galaxy EVO EGX1250EWT 1250W
Software Win7 Ultimate N x64, OSX 10.8.4
Trying to get through a downtown district of a European city is often a different matter, though. Like sh** through a funnel, so most folks are trying to keep the size of their turds to a limit. ;)
Certain cities even charge fees if people wish to go through the downtown district by car (to discourage their use and encourage people to go by bike, scooter, mass transit or whatever), because of the traffic congestion. So SUV drivers are quite frowned upon when using such vehicles downtown, because they hinder the other traffic so much.

And in Asia the traffic is often even worse than in Europe.
I understand that. But I don't live there, so I can't give any good examples for you. Most of the forum is from the US, so that's just what I'm used to catering to, that's all.

I should have realised that. My bad.


If you want to make such a point, better pick a more globally known car brand as a parallel (Asian or Euro brands, like Hyundai, Toyota, Mercedes, BMW, etc.). Cars like the Escalade are barely known here; I actually only know it because of TV. The only US brands selling well in Europe are Ford and General Motors (under the names of Chevrolet (mostly former Daewoo models) and Opel/Vauxhall).
See, you just made my point. If not for the exposure of the top-of-the-line Cadillac model on TV, would you have known about it? The same principle can hold true for AMD. If they put out a product that gets exposure because it rests at the top end, they'll get more brand recognition.

Not completely my fault, IMHO, if your parallels don't work that well for non-Americans. I mean, I can try my best of course, but there is some chance of "failure". ;)
I agree, not really your fault at all. Like I said, I'm just putting it into perspective for the majority of this forum. Feel free to substitute an example that better applies to your region.
 
Joined
Nov 4, 2005
Messages
11,960 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I installed the 115MB Stream package. It made no difference in Adobe; it allowed for 3% more use of the GPU than before, so a total of 12% to transcode a 1.81GB 24Mbps 1080i M2TS file to 1080i MPEG4/DivX. It was still insanely slow.


So either I choose to shoot at a lower resolution than a few-years-old Canon common-format high-def camcorder can do and go back to '90s formats, or I suck it up and continue to spend days on projects.


Yeah, ATI/AMD can suck it.
 

hobgoblin351

New Member
Joined
Nov 25, 2010
Messages
1 (0.00/day)
So, I would like to upgrade to Bulldozer, but I can't wait that long. It seems that an upgrade to an AM3+ board with an AM3 chip is the way to go. Then I can upgrade to Bulldozer in time, when my pockets are deeper. Has anyone heard of the AM3+ boards being worked on, or release dates? I can't seem to find anything on them. And if JF-AMD has a stack of chips on his desk, then is it safe to assume that ASUS, GIGABYTE, and the rest of them have some and are working on putting out AM3+ boards? Why have we heard nothing about them?
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,072 (1.85/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4Gb 1600Mhz CL9 Corsair XMS3 // 2x8Gb 3200 Mhz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w
So, I would like to upgrade to Bulldozer, but I can't wait that long. It seems that an upgrade to an AM3+ board with an AM3 chip is the way to go. Then I can upgrade to Bulldozer in time, when my pockets are deeper. Has anyone heard of the AM3+ boards being worked on, or release dates? I can't seem to find anything on them. And if JF-AMD has a stack of chips on his desk, then is it safe to assume that ASUS, GIGABYTE, and the rest of them have some and are working on putting out AM3+ boards? Why have we heard nothing about them?

AM3+ mobos will be out in the second quarter, which is still a looooooong way from where we are right now.
 

HillBeast

New Member
Joined
Jan 16, 2010
Messages
407 (0.08/day)
Location
New Zealand
System Name Kuja
Processor Intel Core i7 930
Motherboard Gigabyte X58A-UD3R
Cooling Corsair H50 HB.o Special Edition with Koolance CHC-122 NB Block
Memory OCZ Extreme Edition 4GB Dual Channel
Video Card(s) Sapphire Radeon 5870 Vapor-X Rev. 2
Storage 2x 1TB WD Green in RAID
Display(s) BenQ V2400W
Case Lian Li PC-A17 HB.o Special Edition
Audio Device(s) Onboard Realtek 889A
Power Supply Gigabyte Odin Pro 800W
Software Windows 7 Professional
Benchmark Scores 93632 sysPoints in sysTest '09; 47 FPS in Star Tales Benchmark
So, I would like to upgrade to Bulldozer, but I can't wait that long.

Just wait. It's not that hard. I want the funds to buy parts for my new big project but don't have them at the moment. I'm not going to rob a bank to get them. Just wait for Bulldozer to come out before deciding you want it. Planning to buy something before there are even benchmarks of it means the only reason you want it is marketing speak. It may arrive and be awesome, but it could also arrive and be a piece of crap.

Just wait.
 

LightningHertz

New Member
Joined
Dec 27, 2010
Messages
2 (0.00/day)
Give me Memory Controllers, or Give Me... A solar calculator

Client workloads rarely saturate the memory bus. We'll be adding ~50% more throughput than current designs.

Benchmarks and real life usage are 2 different things.

For every person that is bottlenecked on today's systems there are a million others on the interwebs that are not even saturating 1/3 of their memory bandwidth.


People are getting hung up on the number of channels instead of focusing on the amount of bandwidth they can actually achieve and the amount of bandwidth their applications require.

...and rightfully they should. '50% more processing power' was the description given for the Opteron replacement; and even then, that doesn't say anything at all about the rest of the hardware. The number of channels of lower-speed, lower-latency dedicated memory per core is what gives real-world throughput, not just bench numbers.

(I apologize ahead of time for any meanderings that may be momentary rants, lol. I mean to mostly be informative, and hopefully with some humor, but atm I'm a bit upset with AMD ;) )

People can't saturate their memory bandwidth because it can't be done. The bus is fine; the access is what is lacking. The problem is hardware limitations when you try to address the same target with too many cores, and greater RAM speed is almost moot: fewer dedicated memory paths than the number of cores cause contention among cores, among many other things I mention below. You can still fill your bucket (memory) slowly, with a slow hose (low # of RAM channels @ higher speed = higher memory controller & strap latencies, memory latencies, core contention, bank, rank, controller interleaving, all while refreshing, strange ratios, etc.) - but this is not 'performance' when it is time that we want to save. I can't just be excited that I can run 6 things 'ok.'
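
To put a rough number on that contention, here is a minimal sketch (plain C with pthreads; the 256 MB buffers and the 1/2/4-thread sweep are arbitrary picks of mine, not anything AMD published) that streams through private per-thread buffers and reports aggregate read bandwidth:

```c
/* Sketch of a memory-contention test: each thread streams through its own
 * large buffer; watching aggregate bandwidth vs. thread count shows how
 * quickly the shared memory controller saturates.
 * Build: gcc -O2 -pthread bench.c (add -lrt on older glibc) */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>

#define BUF_BYTES (256UL * 1024 * 1024)   /* 256 MB per thread, bigger than any cache */

static void *stream_reads(void *arg)
{
    const long *buf = arg;
    long sum = 0;
    for (size_t i = 0; i < BUF_BYTES / sizeof(long); i++)
        sum += buf[i];
    return (void *)sum;   /* returning the sum keeps the reads from being optimized away */
}

int main(void)
{
    for (int nthreads = 1; nthreads <= 4; nthreads *= 2) {
        pthread_t tid[4];
        long *bufs[4];
        for (int t = 0; t < nthreads; t++) {
            bufs[t] = malloc(BUF_BYTES);
            if (!bufs[t])
                return 1;
            for (size_t i = 0; i < BUF_BYTES / sizeof(long); i++)
                bufs[t][i] = (long)i;     /* touch every page up front */
        }
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int t = 0; t < nthreads; t++)
            pthread_create(&tid[t], NULL, stream_reads, bufs[t]);
        for (int t = 0; t < nthreads; t++)
            pthread_join(tid[t], NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d thread(s): %6.2f GB/s aggregate\n",
               nthreads, nthreads * (double)BUF_BYTES / 1e9 / sec);
        for (int t = 0; t < nthreads; t++)
            free(bufs[t]);
    }
    return 0;
}
```

If the aggregate GB/s stops scaling once a second or third thread joins in, the shared controller, not the cores, is the wall, which is exactly the starvation described above.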

Case in point:
Look at this study performed by Sandia National Laboratories in Albuquerque, N.M. (Please note that they are terming multi-core systems as 'supercomputers'):

https://share.sandia.gov/news/resou...lower-supercomputing-sandia-simulation-shows/


In a related occurrence, look at how the 'CPU race' topped out. More speed in a single core just resulted in a performance-to-consumption ratio that made a really good shin-cooker. Instead, the answer was a smaller die process with low-power, moderate-speed SMP cores, much like nVidia's CUDA or ATI's Stream. Memory controllers/RAM are no different.

What good is a ridiculously fast DDR HT-bus when you can't send solid, concurrent dedicated data streams down it to memory because of all the turn-taking and latencies? It's like taking turns with water spigots down a huge hose that has a small nozzle on it.
You can't achieve high throughput with current (and near-future) configurations. Notice that as soon as we said "Yay! Dual-core AND dual-channel RAM!" it quickly became, "What do you mean, 'controller interleave'?" ...But then - the fix: ganged memory at half the data width.
...And there was (not) much rejoicing. (yay.)

Why did they do this?
(My opinion only): It continues, more and more, to look like it's because they won't ever, ever give you what you want, and it doesn't matter who the manufacturer is. They need you NEEDING the next model after only 6 months. Look at these graphics cards today. I almost spit out my coffee when I read the benchmarks for some of the most recent cards, priced from $384 to $1100. At 800 MHz and up, some sporting dual GPUs and high-speed 1GB-2GB GDDR5, they get LESS THAN HALF of the framerates of my 5-year-old 600 MHz, DDR3 512MB, PCI-E card; same software, with all eye-candy on, and my processor is both older and slower than those showcased. It's obviously not a polygon-math issue. What's going on? Are we going backwards? I can only guess that CUDA and Stream have cores that fight over the memory with a bit-width that is still behind.

I also do 3d animation on the same system that I game with, transcode movies, etc, etc, etc. So far, I have tested both Intel and AMD multi-core multi-threading with real-world, specifically compiled software only (thanks to Intel's filthy compiler tricks.) Engaging additional cores just results in them starving for memory access in a linear fashion. In addition, so far all my tests suggest that no more than 4GB of DDR3-1066 @ 5-5-5-15-30 can be filled on dual channel... at all, on 2 through 6 core systems. (On a side note: WOW- my Intel C2D machine, tested with non-Intel compiled (lying) software, performs like an armless drunk trying to juggle corded telephones with his face.)

Anyway, the memory speed needed for current configurations would be well over what is currently available to even match parallel processing performance for the 3:1 [core : mem controller] ratio when you're done accounting for latencies, scheduling, northbridge strap, throughput reduction due to spread-spectrum because of the high frequency, etc, etc.

So in conclusion, more parallel, lower-speed, low-latency controllers and memory modules (with additional, appropriate hardware) could generate a system with a far greater level of real, usable throughput. Because I would much prefer, but will most likely not be able to afford, a Valencia quad-core (a server core... for gaming too? And then only if it had concurrent memory access), it looks like I'm giving up before Bulldozer even gets here.

6 cores and 2 channels for Zambezi? No thanks.
I'm tired of waiting, both for my renders and for a PC that renders 30 seconds in less than 3 days. ;)

(One final aside): About these 'modules' on Bulldozer - wouldn't that rub additionally if it's the 'core A passes throughput to core B' design that was created in some of the first dual-cores? Time will tell.

~Peace
 
Joined
Nov 4, 2005
Messages
11,960 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
...and rightfully they should. '50% more processing power' was the description given for the Opteron replacement; and even then, that doesn't say anything at all about the rest of the hardware. The number of channels of lower-speed, lower-latency dedicated memory per core is what gives real-world throughput, not just bench numbers.

(I apologize ahead of time for any meanderings that may be momentary rants, lol. I mean to mostly be informative, and hopefully with some humor, but atm I'm a bit upset with AMD ;) )

People can't saturate their memory bandwidth because it can't be done. The bus is fine; the access is what is lacking. The problem is hardware limitations when you try to address the same target with too many cores, and greater RAM speed is almost moot: fewer dedicated memory paths than the number of cores cause contention among cores, among many other things I mention below. You can still fill your bucket (memory) slowly, with a slow hose (low # of RAM channels @ higher speed = higher memory controller & strap latencies, memory latencies, core contention, bank, rank, controller interleaving, all while refreshing, strange ratios, etc.) - but this is not 'performance' when it is time that we want to save. I can't just be excited that I can run 6 things 'ok.'

Case in point:
Look at this study performed by Sandia National Laboratories in Albuquerque, N.M. (Please note that they are terming multi-core systems as 'supercomputers'):

https://share.sandia.gov/news/resou...lower-supercomputing-sandia-simulation-shows/


In a related occurrence, look at how the 'CPU race' topped out. More speed in a single core just resulted in a performance-to-consumption ratio that made a really good shin-cooker. Instead, the answer was a smaller die process with low-power, moderate-speed SMP cores, much like nVidia's CUDA or ATI's Stream. Memory controllers/RAM are no different.

What good is a ridiculously fast DDR HT-bus when you can't send solid, concurrent dedicated data streams down it to memory because of all the turn-taking and latencies? It's like taking turns with water spigots down a huge hose that has a small nozzle on it.
You can't achieve high throughput with current (and near-future) configurations. Notice that as soon as we said "Yay! Dual-core AND dual-channel RAM!" it quickly became, "What do you mean, 'controller interleave'?" ...But then - the fix: ganged memory at half the data width.
...And there was (not) much rejoicing. (yay.)

Why did they do this?
(My opinion only): It continues, more and more, to look like it's because they won't ever, ever give you what you want, and it doesn't matter who the manufacturer is. They need you NEEDING the next model after only 6 months. Look at these graphics cards today. I almost spit out my coffee when I read the benchmarks for some of the most recent cards, priced from $384 to $1100. At 800 MHz and up, some sporting dual GPUs and high-speed 1GB-2GB GDDR5, they get LESS THAN HALF of the framerates of my 5-year-old 600 MHz, DDR3 512MB, PCI-E card; same software, with all eye-candy on, and my processor is both older and slower than those showcased. It's obviously not a polygon-math issue. What's going on? Are we going backwards? I can only guess that CUDA and Stream have cores that fight over the memory with a bit-width that is still behind.

I also do 3d animation on the same system that I game with, transcode movies, etc, etc, etc. So far, I have tested both Intel and AMD multi-core multi-threading with real-world, specifically compiled software only (thanks to Intel's filthy compiler tricks.) Engaging additional cores just results in them starving for memory access in a linear fashion. In addition, so far all my tests suggest that no more than 4GB of DDR3-1066 @ 5-5-5-15-30 can be filled on dual channel... at all, on 2 through 6 core systems. (On a side note: WOW- my Intel C2D machine, tested with non-Intel compiled (lying) software, performs like an armless drunk trying to juggle corded telephones with his face.)

Anyway, the memory speed needed for current configurations would be well over what is currently available to even match parallel processing performance for the 3:1 [core : mem controller] ratio when you're done accounting for latencies, scheduling, northbridge strap, throughput reduction due to spread-spectrum because of the high frequency, etc, etc.

So in conclusion, more parallel, lower-speed, low-latency controllers and memory modules (with additional, appropriate hardware) could generate a system with a far greater level of real, usable throughput. Because I would much prefer, but will most likely not be able to afford, a Valencia quad-core (a server core... for gaming too? And then only if it had concurrent memory access), it looks like I'm giving up before Bulldozer even gets here.

6 cores and 2 channels for Zambezi? No thanks.
I'm tired of waiting, both for my renders and for a PC that renders 30 seconds in less than 3 days. ;)

(One final aside): About these 'modules' on Bulldozer - wouldn't that rub additionally if it's the 'core A passes throughput to core B' design that was created in some of the first dual-cores? Time will tell.

~Peace

While some of your post was accurate, your old graphics card theory is bullshit.

Your eye candy is due to DX rendering paths: your older card will render in DX8 or 9 at OK framerates, but the newer cards will struggle to render all the advanced features of DX11 that make the small differences.


Try HL2 CM 10 with all high settings. I can do it, why can't you?

But yes, you are right about core starvation; I see it happen on my system. Increasing the core speed helps alleviate the problem to a small degree, just the effect of reduced latencies. Adding more RAM won't help, and higher RAM speed won't help. We are entering the era of needing one 2GB stick of RAM per core on its own memory path, or being able to read and write to different parts of RAM and track what becomes available, and where, for the next step in the process. Almost like RAM RAID.
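
On NUMA hardware you can already approximate that "one core, its own memory path" idea in software. A minimal sketch, assuming Linux with libnuma installed (link with -lnuma; CPU 0 is a hypothetical pick), pins a thread to a core and allocates from that core's local memory node:

```c
/* Sketch: pin a thread to one CPU and hand it memory from that CPU's local
 * NUMA node - the closest existing thing to "one stick per core on its own
 * path". Build: gcc numa.c -lnuma */
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        printf("No NUMA support on this box.\n");
        return 1;
    }

    cpu_set_t set;                       /* pin ourselves to CPU 0 ... */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);

    int node = numa_node_of_cpu(0);      /* ... and find its local memory node */
    size_t bytes = 64UL * 1024 * 1024;
    void *buf = numa_alloc_onnode(bytes, node);   /* node-local allocation */
    if (!buf)
        return 1;

    printf("Pinned to CPU 0, allocated %zu MB on NUMA node %d\n",
           bytes >> 20, node);
    numa_free(buf, bytes);
    return 0;
}
```

True per-core channels would need hardware support, but affinity plus node-local allocation is the nearest thing shipping today.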
 

LightningHertz

New Member
Joined
Dec 27, 2010
Messages
2 (0.00/day)
While some of your post was accurate, your old graphics card theory is bullshit.

Your eye candy is due to DX rendering paths: your older card will render in DX8 or 9 at OK framerates, but the newer cards will struggle to render all the advanced features of DX11 that make the small differences.

Try HL2 CM 10 with all high settings. I can do it, why can't you?
But yes, you are right about core starvation; I see it happen on my system. Increasing the core speed helps alleviate the problem to a small degree, just the effect of reduced latencies. Adding more RAM won't help, and higher RAM speed won't help. We are entering the era of needing one 2GB stick of RAM per core on its own memory path, or being able to read and write to different parts of RAM and track what becomes available, and where, for the next step in the process. Almost like RAM RAID.

Hi... Steevo. I'm glad that you are also able to reproduce some of these observations.

I feel I need to clear some things up though, before firing off with 'bullshit':

1) I developed no such theory; it was purely a facetious 'speculation' based upon a real-world observation. As usual, lack of inflection in writing causes these problems. I will annotate such comments in the future, seeing as the words 'I can only guess' don't appear to get that idea across.
2) DirectX 11 rendering paths have nothing to do with DX9 benchmarks of DX9 games just because they're tested on DX11-compatible hardware.
3) I am only familiar with HL2. I don't know why you're asking why I can't run something I neither mentioned nor tried. But now that you bring it up, DX11, like DX10, was touted as being easier to render by compatible hardware and thus requiring less processing power. Why then, if you want to compare DX9 to 10 or 11, are the respective framerates much lower in these title add-ons, with even more powerful hardware at the same resolutions?

Thanks.
Have a nice day.
 
Joined
Jan 20, 2010
Messages
868 (0.16/day)
Location
Toronto, ON. Canada
System Name Gamers PC
Processor AMD Phenom II X4 965 BE @ 3.80 GHz
Motherboard MSI 790FX-GD70 AM3
Cooling Corsair H50 Cooler
Memory Corsair XMS3 4GB (2x2GB) DDR3-1333
Video Card(s) XFX Radeon HD 5770 1GB GDDR5
Storage 2 x WD Caviar Green 1TB SATA300 w/64MB Buffer (RAID 0)
Display(s) Samsung 2494SW 1080p 24" WS LCD HD
Case CM HAF 932 Full Tower Case
Audio Device(s) Creative SB X-FI TITANIUM -PCIE x 1
Power Supply Corsair TX Series CMPSU-650TX (650W)
Software Windows 7 Ultimate 64-bit
So we have a Quad-Channel DDR3 IMC for Server/Workstation CPUs and Dual-Channel DDR3 for Desktop. That really sucks; AMD should have stuck with Quad-Channel support to further boost memory performance, which today they greatly lack.

- Native DDR3-1866 Memory Support [8]
- Dual Channel DDR3 Integrated Memory Controller (Support for PC3-15000 (DDR3-1866)) for Desktop, Quad Channel DDR3 Integrated Memory Controller (support for PC3-12800 (DDR3-1600) and Registered DDR3)[9] for Server/Workstation (New Opteron Valencia and Interlagos)
http://en.wikipedia.org/wiki/Bulldozer_(processor)#Microarchitecture
 

JF-AMD

AMD Rep (Server)
Joined
Dec 18, 2009
Messages
163 (0.03/day)
So, Intel was at triple channel. Performance in many cases showed no appreciable difference between dual channel and triple channel. With SB they moved back to dual channel.

AMD is delivering dual channel with up to 50% greater throughput than current products. Isn't that a better option?

As to quad channel on the desktop, if triple channel was not clearly a product differentiator, why would quad be any better? Sometimes people get caught up in the specs but they don't focus on the output.

If dual channel and quad channel were about the same in throughput would you rather have dual channel with lower cost and lower power or quad with higher cost and higher power?
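
For anyone wanting to sanity-check that trade-off: peak theoretical bandwidth is just channels x transfer rate x 8 bytes per transfer. A quick sketch (the DDR3 speeds below are illustrative picks of mine, not AMD's figures):

```c
/* Back-of-envelope peak memory bandwidth: channels x MT/s x 8 bytes per
 * transfer. The configurations below are illustrative, not AMD's numbers. */
#include <stdio.h>

static double peak_gbs(int channels, int mts)
{
    return channels * (double)mts * 8.0 / 1000.0;   /* decimal GB/s */
}

int main(void)
{
    printf("Dual-channel DDR3-1333: %5.1f GB/s peak\n", peak_gbs(2, 1333));
    printf("Dual-channel DDR3-1866: %5.1f GB/s peak\n", peak_gbs(2, 1866));
    printf("Quad-channel DDR3-1600: %5.1f GB/s peak\n", peak_gbs(4, 1600));
    return 0;
}
```

Achieved throughput always lands well below those peaks, which is the point about focusing on output rather than specs.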
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.15/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
So, Intel was at triple channel. Performance in many cases showed no appreciable difference between dual channel and triple channel. With SB they moved back to dual channel.

AMD is delivering dual channel with up to 50% greater throughput than current products. Isn't that a better option?

As to quad channel on the desktop, if triple channel was not clearly a product differentiator, why would quad be any better? Sometimes people get caught up in the specs but they don't focus on the output.

If dual channel and quad channel were about the same in throughput would you rather have dual channel with lower cost and lower power or quad with higher cost and higher power?

Is Sandy Bridge considered in the 50% greater throughput? Because it has already shown vast improvements over previous Core and Thuban products.
 

JF-AMD

AMD Rep (Server)
Joined
Dec 18, 2009
Messages
163 (0.03/day)
Beats me; I am a server guy. I have no idea what the throughputs look like on the client side.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.15/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Beats me; I am a server guy. I have no idea what the throughputs look like on the client side.

Haha, OK, thanks for at least being honest with me. :toast:
 

AlphaGeek

New Member
Joined
Apr 22, 2011
Messages
2 (0.00/day)
More cache PLZ DAMMIT!

Also meh.

I do not think these will be the saving grace AMD needs.

The only thing that will allow AMD to compete and save them from their slump is developing a mainboard/CPU that can handle triple-channel RAM. And even that won't last now, since Intel has leaked rumors of quad-channel RAM!!!
 
Joined
Jan 24, 2011
Messages
508 (0.10/day)
Location
Upright down-under (Brisbane, Australia)
System Name Frankenstein v7
Processor Intel i7 2600K (@ stock)
Motherboard Asus P8Z68-V
Cooling Corsair H100
Memory Corsair Vengeance 4x4GB DDR3 @ 1866MHz
Video Card(s) Gigabyte 7870 WindForce 2GB
Storage Samsung F3 1TB HDD
Display(s) Samsung S24C750
Case Antec P280
Audio Device(s) Creative Labs X-Fi Elite Pro
Power Supply Corsair HX850
Software Windows 7 Pro - 64-bit
The only thing that will allow AMD to compete and save them from their slump is developing a mainboard/CPU that can handle triple-channel RAM. And even that won't last now, since Intel has leaked rumors of quad-channel RAM!!!

That's unlikely to make a real-world difference - hence the step Intel took in going back to dual-channel.
AMD already has quad-channel, just not for the desktop.
What would make a proper difference is if dual-channel could be coupled to a per-core setup.
 

AlphaGeek

New Member
Joined
Apr 22, 2011
Messages
2 (0.00/day)
That's unlikely to make a real-world difference - hence the step Intel took in going back to dual-channel.
AMD already has quad-channel, just not for the desktop.
What would make a proper difference is if dual-channel could be coupled to a per-core setup.

This is very true. I would love to see anybody, AMD OR Intel, do a per-core setup with their RAM, dual, triple, or quad channel!!!
 