# AMD A10-5800K "Trinity" APU Tested



## btarunr (Mar 22, 2012)

Later this year, AMD will unveil its second-generation accelerated processing units (APUs) in the FM2 package, based on its new "Piledriver" CPU and "Graphics Core Next" GPU architectures. Among these, the part designed with overclockers in mind is the A10-5800K, which features an unlocked base-clock multiplier, four x86-64 cores clocked at 3.80 GHz (nominal) and 4.20 GHz (Turbo Core), and AMD Radeon HD 7660D graphics. Find out more about the lineup here. 

INPAI got its hands on an A10-5800K APU and a supporting socket FM2 motherboard, and wasted no time in comparing it to the current-generation A8-3850. INPAI put the two chips through SuperPi 1M, to measure single-thread performance, and 3DMark 06, to measure embedded-GPU performance. The A10-5800K crunched SuperPi 1M in 23.775 s, while the A8-3850 took 26.039 s. In 3DMark 06, the A10-5800K scored 9396 points to the A8-3850's 6223. The inference that can be drawn from this little test is that Trinity has significantly faster graphics, but not so much a faster CPU (taking into account that the A10-5800K's cores were clocked over 30% higher than those of the A8-3850).
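As a rough sanity check, here is the back-of-envelope arithmetic behind that inference. This is a hedged sketch: it assumes the A8-3850's stock 2.9 GHz base clock and ignores Turbo Core, which would make the per-clock picture look even less flattering for Trinity.

```python
# Leaked scores from the INPAI test, plus assumed stock clocks.
a8_time, a10_time = 26.039, 23.775   # SuperPi 1M, seconds (lower is better)
a8_gpu, a10_gpu = 6223, 9396         # 3DMark 06 scores (higher is better)
a8_clk, a10_clk = 2.9, 3.8           # GHz (base; the A10's Turbo is 4.2)

cpu_gain = a8_time / a10_time - 1    # single-thread speedup
gpu_gain = a10_gpu / a8_gpu - 1      # embedded-GPU speedup
clk_gain = a10_clk / a8_clk - 1      # clock-speed increase

print(f"CPU +{cpu_gain:.1%}, GPU +{gpu_gain:.1%}, clock +{clk_gain:.1%}")
# The CPU gain (~9.5%) trails the ~31% clock bump; the GPU gain is ~51%.
```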



 



*View at TechPowerUp Main Site*


----------



## Vulpesveritas (Mar 22, 2012)

Couple things...

-Aren't Piledriver cores supposed to have 32 KB of L1 cache?
-Why does the A10 system only have 2 GB of RAM when the Llano system has 8 GB?
-That A10 is using nearly 20 watts less than that Llano APU.  
-Why such an old benchmark as SuperPi?


----------



## NC37 (Mar 22, 2012)

Yeah, exactly. Less than 4 GB with a Llano was considered nuts.


----------



## JKnows (Mar 22, 2012)

SuperPi says 8 GB of RAM; 3DMark just didn't recognize it correctly. By the way, it looks very fast, surely faster than Ivy Bridge.


----------



## Vulpesveritas (Mar 22, 2012)

JKnows said:


> SuperPi says 8 GB of RAM; 3DMark just didn't recognize it correctly. By the way, it looks very fast, surely faster than Ivy Bridge.


Look at the allocated memory though.  Seems odd.  
Quick bit of speculation: the latest rumors I've seen say the A10 will have 384 Radeon cores at 800 MHz.  I don't see how that could be VLIW4, especially as I don't see how they could run hybrid Xfire with it.

However, if those values are correct, then it could easily be GCN, given it would be the same clock as a 7750, with 75% of the cores, and with 2133 MHz RAM you would have the same memory bandwidth.  

Still doesn't explain the L1 cache, and I'm wondering how it does against Llano TDP-for-TDP, given it's using ~15% less power.  Lower voltage. 
Looks like IPC is almost that of Llano, though not quite.  But wondering again about that L1...
...
so it may be real.  Not sure.


----------



## Lionheart (Mar 22, 2012)

Sounds like an awesome APU if the price is right


----------



## zomg (Mar 22, 2012)

Vulpesveritas said:


> -Why does the A10 system only have 2 GB of RAM when the Llano system has 8 GB?



system with Llano was with 32-bit OS
system with Trinity was with 64-bit OS
(see bottom of CPU-Z window)


----------



## Vulpesveritas (Mar 22, 2012)

zomg said:


> system with Llano was with 32-bit OS
> system with Trinity was with 64-bit OS
> (see bottom of CPU-Z window)



Then why does the Llano A8 say 8 GB of RAM, but the Trinity A10 only 2 GB?.. 
Something seems off with that.


----------



## Sihastru (Mar 22, 2012)

Lol, "Trinity Devastator Desktop", devastatingly sluggish... unfortunately it looks like a higher-clocked Bulldozer with slightly better (GCN) graphics.

So Trinity/Piledriver will not be AMD's savior.


----------



## xenocide (Mar 22, 2012)

I was expecting more.  The upgraded GPU is nice, but the CPU is barely better considering how much higher it's clocked...


----------



## Vulpesveritas (Mar 22, 2012)

xenocide said:


> I was expecting more.  The upgraded GPU is nice, but the CPU is barely better considering how much higher it's clocked...



It's better than Bulldozer... and it's more power efficient.


Sihastru said:


> Lol, "Trinity Devastator Desktop", devastatingly sluggish... unfortunately it looks like a higher-clocked Bulldozer with slightly better (GCN) graphics.
> 
> So Trinity/Piledriver will not be AMD's savior.


Devastatingly sluggish?  Once again: faster than Llano, more power efficient.  And let's not forget GCN's GPGPU performance and the possibility of offloading x86 instructions to it.  
Not to mention AMD's version of Quick Sync, visual enhancements, and, oh look, unlocked multipliers.  And it will probably be priced under $150.


----------



## JKnows (Mar 22, 2012)

zomg said:


> system with Llano was with 32-bit OS
> system with Trinity was with 64-bit OS
> (see bottom of CPU-Z window)



Do you mean:
system with Llano was with 64-bit OS
system with Trinity was with 32-bit OS ??

That is cool; with a 64-bit OS on Trinity we should see more 3DMark points, and even more using Windows 8 (Trinity is optimized for Windows 8).


----------



## Ikaruga (Mar 22, 2012)

Those are very good IGP scores indeed. 

SM2.0 Score: 3285 ?
HDR/SM3.0 Score: 4067 ??

That thing almost doubles Llano's scores with high detail and high resolution enabled, and it can also run three Full HD displays? Very impressive.


----------



## Sihastru (Mar 22, 2012)

Vulpesveritas said:


> It's better than bulldozer... and it's more power efficient.
> 
> Devastatingly sluggish?  once again, faster than Llano, more power efficient.  And let's not forget GCN's GPGPU performance and the possibility of offloading x86 instructions to it.
> Not to mention AMD's version of quicksync, visual enhancements, and oh look unlocked multipliers.  And it will probably be priced less than $150.



No one is complaining about a lack of "features". But there are a few problems. It's not really any better than BD; there's a sizeable clock-speed difference there that explains the slight boost in performance (and not in a flattering way).

As for GPGPU, I'm still waiting for something useful other than video encoding to leverage it. If you're telling me it will offload the x86 instruction set to GCN, I'm gonna start laughing. Do you even believe that, or did you just want to make a short list longer?

I will attack even the "unlocked" part: at a turbo speed of 4.2 GHz, overclocking isn't really needed.

We'll talk pricing when I see it on the shelves. Lately AMD doesn't have a good track record when it comes to pricing.

It does however have a really nice GPU in it. That's its _only_ saving grace.


----------



## NC37 (Mar 22, 2012)

Well, can only go so far with simple synthetic benches. Guess we'll be seeing more real world ones within the next month. APUs weren't meant to be speed machines. If they were, they'd have more functionality intact from the CPUs they are based on. Ultimately AMD's savior is graphics. ATI's legacy will keep them afloat.

But an APU of Trinity caliber for the same price range as Llano would just be killer. There is no way Intel will be able to beat that for awhile in anything other than CPU performance. $500 laptop that can only run CPU tasks well, or a $500 laptop that runs heavier GPU stuff with a hit to CPU. I'll take the GPU laptop any day. CPU intensive tasks, just use a tower, or wait a few mins longer for the tasks to complete.


----------



## nt300 (Mar 22, 2012)

Sihastru said:


> Lol, "Trinity Devastator Desktop", devastatingly sluggish... unfortunately it looks like a higher-clocked Bulldozer with slightly better (GCN) graphics.
> 
> So Trinity/Piledriver will not be AMD's savior.


You couldn't be more wrong. This is not a completely finished product yet. The finished Trinity CPUs should offer about a 30% CPU performance increase over Llano and up to a 50% increase in graphics performance. 
AMD's Trinity is in competition with Llano (not Intel), and it offers a big performance improvement as it stands, which should only grow as it nears release.


----------



## Sihastru (Mar 22, 2012)

How can a product be in competition with the product it replaces? Outside of a few weeks where you might see both in reviews, the old product will be phased out.


----------



## Completely Bonkers (Mar 22, 2012)

OK, performance isn't brilliant - but - it is a damn sight better than Atom. Remember Atom three years ago with the 230, then the 330. Then the 525, and now the 2700. Over three years, performance has increased, what, 20%?  Shame on Intel (low-power entry level).


----------



## nt300 (Mar 22, 2012)

Sihastru said:


> How can a product be in competition with the product it replaces. Outside of a few weeks where you might see both in reviews, the old product will be phased out.


No, you're right; I didn't mean to phrase it that way.
The only competition AMD has against Intel is the GPU part of the APUs. It should also take about 4 to 6 months for Trinity to replace Llano, which is why I say they will compete with each other. AMD needs to price Trinity strategically so it can push as many of them through the channel as possible. AMD has a great opportunity with Trinity to gain back market share. 

The Trinity APUs out and about are engineering samples. The retail stepping revision should perform better.


----------



## Edgarstrong (Mar 22, 2012)

Do you think I can use this new APU in an HTPC that will be used for Blu-ray videos most of the time, and skip the graphics card?


----------



## meirb111 (Mar 22, 2012)

Once more, zero gain in performance per clock!


----------



## blibba (Mar 22, 2012)

Edgarstrong said:


> Do you think I can use this new APU in a HTPC that will be used for Blu-Ray videos most of the time and skip graphics card?



Of course...

A solution like this is very much overkill for Blu-ray playback.


----------



## Dent1 (Mar 22, 2012)

meirb111 said:


> Once more, zero gain in performance per clock!



How do you know that?

We'll have to wait for official reviews that show the A8-3850 and A10-5800K at matching clock speeds, and thus true performance-per-clock results. Until then you are guessing.


----------



## bencrutz (Mar 22, 2012)

looks nice to me.
but yeah, haters gonna hate


----------



## Andy77 (Mar 22, 2012)

JKnows said:


> SuperPi says 8 GB of RAM; 3DMark just didn't recognize it correctly. By the way, it looks very fast, surely faster than Ivy Bridge.



Where did you learn to count bytes? That's 8 *megabytes* of RAM used by SuperPi to make all the calculations possible. It even says "Allocated", which means a chunk out of the total, given to the application to use.



zomg said:


> system with Llano was with 32-bit OS
> system with Trinity was with 64-bit OS
> (see bottom of CPU-Z window)



It's backwards...
A10: CPU-Z x32, 2.5 GB RAM
A8: CPU-Z x64, 7.6 GB RAM


----------



## meirb111 (Mar 22, 2012)

Dent1 said:


> How do you kow that?
> 
> We'll have to wait for official reviews to show the A8-3850 and A10-5800K  at matching clock speeds. Thus, performance per clock results. Until then you are guessing.





You didn't read. Here is the quote: "not so much CPU (taking into account A10-5800K cores were clocked over 30% higher than those of the A8-3850)".


----------



## Aquinus (Mar 22, 2012)

Andy77 said:


> Where did you learn to count bytes? Those are 8 *Mega Bytes* of RAM used by SuperPI to make all the calculations possible. It even says "Allocated", which means a chunk out of the total given for the application to use.
> 
> 
> 
> ...



SuperPi is also a 32-bit application, so it can only see up to 2 GB of memory, which is exactly how much SuperPi reports in that case. There is something else going on here, though. If Trinity is running a 32-bit OS, the GPU could be mapping 1.5 GB for video, which would leave only 2.5 GB for everything else. SuperPi then, once again, sees only 2 GB because it is still 32-bit.
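A quick sketch of the address-space arithmetic behind that 2 GB figure (assuming the default 2 GB user / 2 GB kernel split of 32-bit Windows; the /3GB boot switch can shift it):

```python
# A 32-bit pointer can address 2**32 bytes; 32-bit Windows reserves half
# of that virtual space for the kernel by default, so a 32-bit process
# like SuperPi sees at most ~2 GiB no matter how much RAM is installed.
GIB = 2**30
virtual_space = 2**32             # 4 GiB of addresses
user_partition = virtual_space // 2

print(user_partition // GIB)      # -> 2
```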


----------



## Super XP (Mar 22, 2012)

meirb111 said:


> You didn't read. Here is the quote: "not so much CPU (taking into account A10-5800K cores were clocked over 30% higher than those of the A8-3850)".


We will never know until the product is properly reviewed and released. Until then, this is all speculation. 
AMD did state a 30% CPU performance improvement for Trinity's Piledriver cores versus current Llano. The GPU should offer about 50% to 60%, says AMD.


----------



## Andy77 (Mar 22, 2012)

Aquinus said:


> SuperPi is also a 32-bit application, so it can only see up to 2gb of memory, which is exactly how much memory SuperPi sees in that case. There is something else going on here though. If trinity is running 32-bit OS, the GPU could be mapping 1.5gb for video which would only leave 2.5gb for everything else. Then SuperPi once again, only sees 2gb because it is still 32-bit.



I was hinting that the user saw an 8-ish figure and guessed it was the total amount of system RAM, when it's only 8 MB of memory used by the app. It's easy to mistake billions for millions and end up at GB when thinking about it.

As for your suggestion, here's my counter-theory: that 32-bit system actually has only 2 GB of RAM. 3DMark sees 2 GB, SuperPi sees 2 GB... it's not hard to figure Trinity ran on 2 GB of RAM.

Oh, and UMA can't be larger than 512 MB. The BIOS/UEFI doesn't allow it, and by default it takes up 256 MB, or 512 MB if a large amount of RAM is detected. So... no, it can't share 1.5 GB for video.

On the 64-bit system, SuperPi isn't able to detect the amount of RAM, mainly because the variable that stores the value is a 32-bit integer whose maximum unsigned value is about 4.29 billion, when it would need a 64-bit type to store the 8-billion-plus bytes of RAM the Llano system has. It has nothing to do with the app itself being 32-bit; it still relies on the OS core functions to get those values.
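That overflow point can be illustrated directly (a sketch of the claimed counter behavior, not SuperPi's actual code, which we can't see):

```python
# An unsigned 32-bit counter tops out at 2**32 - 1 (~4.29 billion),
# so a byte count for 8 GiB of RAM cannot fit and wraps around.
UINT32_MAX = 2**32 - 1
ram_bytes = 8 * 2**30             # 8 GiB = 8,589,934,592 bytes

print(ram_bytes > UINT32_MAX)     # True: the value doesn't fit
print(ram_bytes & UINT32_MAX)     # 0 -- 8 GiB wraps to exactly zero
```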


----------



## Aquinus (Mar 22, 2012)

Andy77 said:


> As to your suggestion, here's my competition: that 32 bit system actually has only 2GB of RAM. 3Dmark sees 2GB, SupertPI sees 2GB... it's not hard to figure Trinity ran on 2GB of RAM.



Then why does 3DMark say the system has 2.5 GB of memory? Maybe 3 GB with 512 MB used for video?



Andy77 said:


> Oh, and UMA can't be larger than 512MB. The BIOS/UEFI doesn't allow it and by default it takes up 256MB or 512MB if large amounts of RAM is detected. So... no, it can't share 1,5GB for video.



Did not know that; I thought it allowed for more. However, it can share 1.5 GB of memory; it just doesn't use memory-mapped I/O for that if you're running a DX application. It will swap pages in and out of video memory if there isn't enough space. That is why, if you look at DxDiag, you will notice that the "available video memory" will exceed the amount on (or in this case, allocated to) the video card.

Edit: Here, this is what I mean. Keep in mind my 6870s have only 1 GB of on-board video memory, and before someone says that is how much system memory is available, I had 16 GB with something like 13 GB or 12 GB free. The same thing happened when I only had 8 GB on my last build.


----------



## Dent1 (Mar 22, 2012)

meirb111 said:


> You didn't read. Here is the quote: "not so much CPU (taking into account A10-5800K cores were clocked over 30% higher than those of the A8-3850)".



I saw the quotes. But there is no evidence to suggest the A10-5800K wouldn't have performed similarly at a lower clock.


----------



## Andy77 (Mar 22, 2012)

Dent1 said:


> I saw the quotes. But there is no evidence to suggest the A10-5800K wouldn't have performed similarly at a lower clock.



BD/PD need higher clock to achieve normal IPC... but this also makes the clock-by-clock point useless. Because Husky internally has a shorter instruction pipeline than PD, it's normal for the chip to be clocked lower; try to clock it at PD levels and you'll see instability issues. Not so for PD: because its pipeline is longer, it can handle a higher clock by default to achieve the same performance. The main difference is that, internally, the more instructions you give a Husky at one time, the more it will choke, while PD, because of its longer pipeline, handles "crowded" situations better.


----------



## Vulpesveritas (Mar 22, 2012)

Edgarstrong said:


> Do you think I can use this new APU in a HTPC that will be used for Blu-Ray videos most of the time and skip graphics card?


It's overkill, but yes, you can.  You can also probably use a number of emulators without issue, given the GPU.


meirb111 said:


> You didn't read. Here is the quote: "not so much CPU (taking into account A10-5800K cores were clocked over 30% higher than those of the A8-3850)".


There may be diminishing returns on performance per clock with Trinity; we don't know.
Also, if this is real, it's pre-production silicon and therefore unlikely to perform as well as what we'll see in retail.



Sihastru said:


> No one is complaining about a lack of "features". But there are a few problems. It's not really any better than BD; there's a sizeable clock-speed difference there that explains the slight boost in performance (and not in a flattering way).
> 
> As for GPGPU, I'm still waiting for something useful other than video encoding to leverage it. If you're telling me it will offload the x86 instruction set to GCN, I'm gonna start laughing. Do you even believe that, or did you just want to make a short list longer?
> 
> ...


Umm... if it is GCN, then there's this:  http://en.wikipedia.org/wiki/Southern_Islands_(GPU_family)
"Support for x86 addressing with unified address space for CPU and GPU"
http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/6
"In terms of base features the biggest change will be that GCN will implement the underlying features necessary to support C++ and other advanced languages. As a result GCN will be adding support for pointers, virtual functions, exception support, and even recursion. These underlying features mean that developers will not need to “step down” from higher languages to C to write code for the GPU, allowing them to more easily program for the GPU and CPU within the same application. For end-users the benefit won’t be immediate, but eventually it will allow for more complex and useful programs to be GPU accelerated."


----------



## Andy77 (Mar 22, 2012)

Aquinus said:


> Maybe 3Gb with 512Mb used for video?



That might be it.




Aquinus said:


> However it can share 1.5gb of memory, it just doesn't use memory-mapped I/O for that if you're running a DX application.



It can share, but I believe that is done dynamically. When running standard applications, and 3DMark is partially just that, the shared RAM is included in the main system RAM displayed, while the "dedicated" RAM is subtracted.

Anyway, by the looks of it, it's 3 GB vs 8 GB of TOTAL system RAM. I don't know if 3DMark would be affected by it; it's still a lot of RAM for just one benchmark application, and it still has 512 MB of exclusive RAM for video.




Vulpesveritas said:


> For end-users the benefit won’t be immediate, but eventually it will allow for more complex and useful programs to be GPU accelerated."



What I've been thinking since I first saw the specs pop up in the wild... imagine if consoles use GCN. It would allow devs to code better engines for all platforms and get rid of the "it's a console port" stench.


----------



## trickson (Mar 22, 2012)

WOW, that is SWEET!! Really looks great! Got to say AMD has one hell of an APU there! WOW, fantastic!


----------



## faramir (Mar 22, 2012)

Andy77 said:


> BD/PD need higher clock to achieve normal IPC...



w00tL0L ??? Spare us the nonsense if you don't know what you're talking about. Thank you.


----------



## Dent1 (Mar 22, 2012)

Andy77 said:


> BD/PD need higher clock to achieve normal IPC.



What is a "normal IPC"? I googled "normal IPC" and couldn't find anything!

Who are you to decide what a "normal IPC" is?


----------



## trickson (Mar 22, 2012)

faramir said:


> w00tL0L ??? Spare us the nonsense if you don't know what you're talking about. Thank you.





Dent1 said:


> What is a normal IPC? I googled "Normal IPC" and I couldnt find anything!
> 
> Who are you to decide what a "normal IPC" is?



Can we just stay on topic for once? Please, let's stop all this stuff. 
I think this is really sweet! Really! AMD has one kickass APU! Fantastic job, AMD!


----------



## Vulpesveritas (Mar 22, 2012)

So I'm wondering how it will fare with 2133 MHz RAM, as its memory controller is supposed to support it, and AMD should be coming out with some low-profile-heatsink 2133 MHz Radeon-branded RAM before Trinity launches.  
Then too... I'm wondering whether it is VLIW4, as originally said, or GCN, as it would now appear from the "384 Radeon cores" statement, especially given the A8-3850 has 400 VLIW5 shader units.  
Also, I wonder how it overclocks, given it appears to be 15% more energy efficient than Llano.


----------



## JMccovery (Mar 22, 2012)

Vulpesveritas said:


> Look at the allocated memory though.  Seems odd.
> Oh speculation quick- the latest rumors i've seen are saying the A10 will have 384 Radeon cores at 800mhz.  I don't see how that could be VLIW4, especially as I don't see how they could run hybrid Xfire with it.
> 
> However if those values are correct, then it could easily be GCN, given it would be the same clock as a 7750, with 75% of the cores, and with 2133mhz RAM you would have the same memory bandwidth.
> ...



I think it is VLIW4. Evergreen, Barts, Turks, Caicos and Llano were VLIW5: a shader block or 'core' contained 16 5-way ALUs (80); whereas with Cayman, a 'core' contains 16 4-way ALUs (64), of which Trinity has six (6×64 = 384).
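The shader arithmetic in that guess works out as follows (a sketch using the counts stated above; none of this is confirmed spec):

```python
# Each SIMD 'core' holds 16 VLIW units; the width of each unit
# (5-way or 4-way) times the number of blocks gives the shader count.
def shader_count(blocks, alu_width, units_per_block=16):
    return blocks * units_per_block * alu_width

print(shader_count(5, 5))  # Llano A8 (VLIW5): 400
print(shader_count(6, 4))  # Trinity, if Cayman-style VLIW4: 384
```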


----------



## Vulpesveritas (Mar 22, 2012)

JMccovery said:


> I think it is VLIW4. Evergreen, Barts, Turks, Caicos and Llano were VLIW5: a shader block or 'core' contained 16 5-way ALUs (80); whereas with Cayman, a 'core' contains 16 4-way ALUs (64), of which Trinity has six (6×64 = 384).



The thing I don't understand about that is: what the heck could it Xfire with?  Not to mention I'm uncertain how we would be seeing a near-50% increase in graphics performance using 384 VLIW4 shaders vs 400 VLIW5 shaders.  

Also, look at Southern Islands, specifically Cape Verde: the 7750 has 512 cores.  384 is exactly 75% of 512, and they're clocked at 800 MHz, the same as the cores in the 7750.  Not to mention the similarity in memory data rate - using the 2133 MHz memory controller, the memory bandwidth would be similar between the two graphics processors.  And given that, it would easily Xfire with the 7750.  

Also remember Trinity was originally slated for a Q1 2012 release - it's been pushed back, and not for manufacturing issues.  If I were to guess, AMD would push GCN for improved performance overall and to push 'fusion' faster.
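For what it's worth, the numerology only partly checks out (a sketch using the rumored figures above; note the reference GDDR5 HD 7750 has far more bandwidth, though DDR3-equipped variants land in the same ballpark):

```python
# Rumored shader ratio vs. Cape Verde (HD 7750).
print(384 / 512)                    # -> 0.75

# Dual-channel DDR3-2133 feeding the IGP:
# 2133 MT/s * 128-bit bus / 8 bits per byte.
apu_bw = 2133e6 * 128 / 8 / 1e9
print(f"{apu_bw:.1f} GB/s")         # -> 34.1 GB/s (reference 7750: ~72 GB/s)
```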


----------



## Dent1 (Mar 22, 2012)

trickson said:


> Can we just stay on topic for once? Please lets stop all this stuff.
> I think this is really sweet! Really! AMD has one kickass APU! Fantastic job AMD!



Well, I don't see you quoting Andy77, who steered the topic away with his IPC talk.

And yes fantastic APUs indeed.


----------



## mastrdrver (Mar 22, 2012)

The GPU is VLIW4. Even if we assume the Trinity CPU part is slower than the Llano CPU part, it is encouraging that the GPU score increased by so much. This overall look bodes well for the rumor that 17 W Trinity parts equal 25/35 W Llano laptop parts.

I wish the IB 3DMark 06 scores had been leaked in the other thread.



Andy77 said:


> BD/PD need higher clock to achieve normal IPC...



That is not a fact but an assumption. Since they did not clock Llano and Trinity the same, it's impossible to tell if the IPC went up, went down, or stayed the same... or, as you put it, achieved "normal IPC". Whatever that means.


----------



## Drac (Mar 22, 2012)

How much faster than an HD 5770 would it be? I just want to replace my entire system, and it would be good to skip buying a new GPU these days.


----------



## Vulpesveritas (Mar 22, 2012)

Drac said:


> How much faster than an HD 5770 would it be? I just want to replace my entire system, and it would be good to skip buying a new GPU these days.



"Faster than an HD 5770?"  Given that the Llano GPU is slightly slower than a 6570, and that this is billed as a 7660, it will be a somewhat slower GPU than the 5770.  And if it's VLIW4, it probably won't be able to do any Xfire either.  
If it's GCN, though, you get to Xfire it with a 7750. lol


----------



## JMccovery (Mar 22, 2012)

Vulpesveritas said:


> The thing I don't understand about that is: what the heck could it Xfire with?  Not to mention I'm uncertain how we would be seeing a near-50% increase in graphics performance using 384 VLIW4 shaders vs 400 VLIW5 shaders.
> 
> Also, look at Southern Islands, specifically Cape Verde: the 7750 has 512 cores.  384 is exactly 75% of 512, and they're clocked at 800 MHz, the same as the cores in the 7750.  Not to mention the similarity in memory data rate - using the 2133 MHz memory controller, the memory bandwidth would be similar between the two graphics processors.  And given that, it would easily Xfire with the 7750.
> 
> Also remember Trinity was originally slated for a Q1 2012 release - it's been pushed back, and not for manufacturing issues.  If I were to guess, AMD would push GCN for improved performance overall and to push 'fusion' faster.



I don't think backporting the GCN design to the 32nm process would be feasible; it would be 'easier' (for AMD, not GF) to shrink a Northern Islands derivative to 32nm. GCN will make its APU debut in Kaveri, Kabini and Temash later next year.


----------



## Vulpesveritas (Mar 22, 2012)

JMccovery said:


> I don't think backporting the GCN design to the 32nm process would be feasible, it would be 'easier' (for AMD not GF) to shrink a Northern Islands derivative to 32nm. GCN will make its APU debut in Kaveri, Kabini and Tamesh later next year.



So was the original plan.  But the original plan also had Piledriver getting a 10-core processor, aka Komodo, and AMD decided to change that.  The simple truth is we won't know 100% for sure until AMD releases the processor.  I hope it is GCN so we can Xfire it, because otherwise I don't see quite as much value compared to the last generation, especially once desktop prices drop and we can get a used 6570, Xfire it with a 3870K, and pretty much get the same performance for less.  Sure, Trinity is more power efficient, but eh.


----------



## mastrdrver (Mar 22, 2012)

Changing from 10 cores to 8 is orders of magnitude easier than porting an architecture designed for bulk silicon over to SOI.

To use the car analogy: it would be like saying adding another engine option to a car is just as easy as moving the car to an entirely different assembly facility, one the design was never meant to go down. It's possible, but it would take a lot of engineering resources to accomplish.


----------



## Drac (Mar 22, 2012)

Vulpesveritas said:


> "faster than an HD 5770?"  given that the llano GPU is slightly slower than a 6570... and that this is considered a 7660, it will be a slightly slower GPU than the 5770.  And if it's VLIW4 it probably won't be able to do any xfire either.
> If it's GCN though you get to xfire it with a 7750. lol



I got a new HD 5770 for 140 euros almost two years ago. I want to spend more or less the same money on a new card that is more efficient and faster than the 5770, but that's impossible these days, and I won't spend 150 euros again after two years for the same performance. This CPU+GPU was my hope to do the trick.


----------



## MikeMurphy (Mar 22, 2012)

Drac said:


> How much faster than an HD 5770 would it be? I just want to replace my entire system, and it would be good to skip buying a new GPU these days.



MUCH, MUCH slower than a 5770.  Your 5770 is still a really good card.  Keep it.

I recently added a 5750 to my A8-3850.  The difference is substantial.


----------



## MikeMurphy (Mar 22, 2012)

Andy77 said:


> Oh, and UMA can't be larger than 512MB. The BIOS/UEFI doesn't allow it and by default it takes up 256MB or 512MB if large amounts of RAM is detected. So... no, it can't share 1,5GB for video.



This is wrong.  The video memory is typically user-selectable.  My A75M-UD2H was set to 1 GB.


----------



## Vulpesveritas (Mar 22, 2012)

MikeMurphy said:


> MUCH, MUCH slower than a 5770.  Your 5770 is still a really good card.  Keep it.
> 
> I recently added a 5750 to my A8-3850.  The difference is substantial.



"Much, much slower?"  The IGP in the Llano A8 is about half the speed of the 5750 without OC.  The Trinity A10 should be about 75% as fast without OC - just below the speed of a 6670 if you are running DDR3-2133.  (This is a rough estimated guess.)


----------



## Andy77 (Mar 23, 2012)

MikeMurphy said:


> This is wrong.  The video memory is typically user-selectable.  My A75M-UD2H was set to 1 GB.



Big words... for such a small reply.

I'm partially correct. It all depends on the mainboard and BIOS. The ASRock A75 Extreme6 doesn't allow more than 512 MB, and this limit has been in place for a long, long time on a lot of other integrated-graphics mainboards. I suppose a manufacturer can go out of spec and increase that value if it codes its own BIOS; no fault if it does so. But for the system in question, both are missing 512 MB from a total RAM rounded to a typical number.



mastrdrver said:


> That is not a fact but an assumption. Since they did not clock Llano and Trinity the same, it's impossible to tell if the IPC went up, went down, or stayed the same... or, as you put it, achieved "normal IPC". Whatever that means.





Dent1 said:


> What is a "normal IPC"? I googled "normal IPC" and couldn't find anything!
> 
> Who are you to decide what a "normal IPC" is?





faramir said:


> w00tL0L ??? Spare us the nonsense if you don't know what you're talking about.




I didn't know the TPU forum has such sensitive... guys. If I can call you that.
You might want to try replying with some maturity if you want a decent reply.

It was a mistake on my part... I was thinking of IPS and wrote IPC, even if it isn't too far off in meaning.

Each product is placed at the level at which it is marketed based on TDP. From there, placement is refined based on slight clock differences. What I'm saying is that at the same or lower TDP, the Trinity-based APU can clock higher than the Llano APU. In the binning process, the Trinity APU meets the desired parameters for its product placement at a higher clock, where the Llano only met them at a lower clock. That's what I meant by "normal" IPS: performance relative to its place among the other products after binning. And this is not "speculation" or "me being someone"; it's how chips are placed. Those Trinity chips which won't support the clocks above will get lower clocks, IPS drops, and they will be placed in a lower category of products.

My point: complaining about the A10 being clocked higher than the A8 has no real meaning... other than complaining.

@Dent1, the evidence is in BD's benchmarks, which PD is based on, unless they redid a 5-year-old uarch overnight /s. Lower the clocks and see it barely catch up to its older brother. And try to use your common sense more, instead of Google.


----------



## trickson (Mar 23, 2012)

Andy77 said:


> I didn't know the TPU forum has such sensitive... guys. If I can call you that.
> You might want to try replying with some maturity if you want a decent reply.


There are a few here that are very sensitive; you really have to watch it here. 

Maturity? Not going to happen. Take what you can get and run. 

Try not to mention you-know-what and this will not be an issue.
Case in point: never use the "B" word http://www.techpowerup.com/forums/showthread.php?t=162689&page=2


----------



## mastrdrver (Mar 23, 2012)

Andy77 said:


> I didn't know TPU forum has such sensitive... guys. If I can call you that.
> You might want to try replying with some maturity if you want a decent reply.
> 
> It was a mistake on my part... I was thinking at IPS and wrote IPC, even if it wasn't to far off of the meaning.
> ...



IPS? Isn't that a display panel tech?


----------



## xenocide (Mar 23, 2012)

mastrdrver said:


> IPS? Isn't that a display panel tech?



I assume he did actually mean IPC but didn't really know what it meant or why it was important. My guess is he meant Trinity needed IPC on par with Intel's offerings, which I don't necessarily agree with.


----------



## Vulpesveritas (Mar 23, 2012)

xenocide said:


> I assume he did actually mean IPC, but didn't really know what it meant or why it was important.  I assume he meant Trinity needed an IPC on par for Intel offerings, which I don't necessarily agree with.



Exactly, it all comes down to performance / watt here. A 50% faster GPU and a 20% faster CPU while using 15% less power, on test silicon which isn't even as good as production silicon, and on a 32-bit OS instead of a 64-bit one, which limits how much RAM can be used on top of that.  

That, in my book, is a definite improvement overall. It's achieving higher CPU performance while using less power than Llano. Period. End-of-line. IPC is no good if your clocks are 10 MHz, and clocks aren't any good if your IPC sucks. It comes down to the best IPC + clocks you can get per watt, and Trinity is apparently a decent improvement over the Llano STARS core in that regard. Good to see the modules are actually doing what they were meant to do in the first place: be more power efficient. 
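A quick sketch of that perf/watt arithmetic (the 20% CPU and 15% power figures are this thread's reading of the leaked numbers, not confirmed measurements):

```python
# Relative CPU performance-per-watt, using the thread's assumed figures:
# Trinity ~20% faster than Llano at ~15% lower power (speculative numbers).
llano_perf, llano_power = 1.00, 1.00      # normalized Llano baseline
trinity_perf = llano_perf * 1.20          # +20% CPU performance
trinity_power = llano_power * 0.85        # -15% power draw

perf_per_watt_gain = (trinity_perf / trinity_power) / (llano_perf / llano_power)
print(f"CPU perf/watt gain: {perf_per_watt_gain:.2f}x")  # prints ~1.41x
```

By the same arithmetic, a 50% GPU gain at 15% less power would come out to roughly 1.76x.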

I'm also wondering how the 65w A10 will do XD.


----------



## Andy77 (Mar 23, 2012)

mastrdrver said:


> IPS? Isn't that a display panel tech?



IPS = IPC x clock
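A minimal sketch of that identity (illustrative numbers only, nothing measured from these chips):

```python
def instructions_per_second(ipc: float, clock_hz: float) -> float:
    """Raw throughput: IPS = IPC x clock frequency."""
    return ipc * clock_hz

# Illustrative only: the same throughput can come from higher IPC
# at a lower clock, or lower IPC at a higher clock.
chip_a = instructions_per_second(ipc=2.0, clock_hz=3.0e9)
chip_b = instructions_per_second(ipc=1.5, clock_hz=4.0e9)
assert chip_a == chip_b  # both 6e9 instructions/second
```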

I retract the "use common sense" suggestion... stick to Google, works better for ya! 

*trickson*, ok... thing is I can't take anything out of this, it's all worthless. So I prefer to walk away.


----------



## Super XP (Mar 23, 2012)

People need to grow a thick skin, screw this sensitivity crap.


----------



## trickson (Mar 23, 2012)

Super XP said:


> People need to grow a thick skin, screw this sensitivity crap.



The problem is this will never happen. As time goes on we have a tendency to coddle people more and more. Just look at how things were 15-20, even 30 years ago. Now every kid gets an award for everything! No matter how bad they suck, they get praised and told how great they are. There is no real ownership of any behavior. There was a time when you were told to stand up to the bully in school and punch him in the face! Now you are told to run away and tell an adult; we called them tattletales! WOW, just how far we have gone over the edge!


----------



## BeepBeep2 (Mar 23, 2012)

trickson said:


> The problem is this will never happen, As time goes on we have a tendency to coddle people more and more. Just look at how things were 15-20 even 30 years ago. Now look every kid gets an award for every thing! No matter how bad they suck they get praised and told how great they are. There is no real ownership for any behavior. There was a time when you were told to stand up to the bully in school punch him in the face! Now you are told to run away and tell an adult, We called them tattle tails! WOW just how far we have gone over the edge!


You went from heavy Intel fanboy to heavier sarcastic AMD fanboy  I apologize for the vulgarity and personal attacks in my post towards you in that other thread; however, your blatant fanboyism/partiality was a bit more out of line than the other guys'. IMO, your first comments there were far from mature, and you should be the last person to talk about maturity... Let's be real: speak the truth in whole... don't speak 20% of what you want to speak and skew it to seem like the truth. I personally am disappointed in the CPU performance of this APU. However, it is totally absent of L3 cache, so maybe Piledriver will do a bit better in CPU-only form. Still, I doubt AMD will improve much. 5-10% performance per clock over Bulldozer coupled with a little OC headroom in the same power envelope would surprise me. Still, they are *way* behind Intel's offerings in performance per watt.

As far as speaking about kids and bullies in elementary, middle, and high school... it is still that way. I am a junior in high school... I definitely grew up in the "stand up for yourself despite it getting you suspended" environment. However, most of the kids grow up by the time they hit high school. In fact, law enforcement gets involved, because schools are monitored with cameras and any fights or backlash occur out in the open. Kids are threatened with expulsion even if they just talk about doing such things. Also, as far as behavior outside that, it isn't just the coddling; kids now have instant access to an infinite amount of free information. Unfortunately, some children don't learn to do things themselves (what you would call a spoiled child, who is all about "me me me" and doesn't even know how to clean up after himself or meet a deadline), while other children take advantage of the resources today's age has given them and succeed and achieve more than any generation before them. By the way, it is the adults who grew up 15-20, even 30 years ago that are influencing and teaching the children of today. It's not a personal dig this time, just my opinion on the topic. I will be 17 soon, if you'd like to age-discriminate.


----------



## trickson (Mar 23, 2012)

Oh, the youth of today! If only they really had respect for their elders. I have kids that are older than most of you, and grandkids now. What has this world come to?
AMD is doing well in both their CPU and APU lines. They are not that bad and can keep up really well. My honest opinion is AMD is going to continue to provide us all with CPUs, APUs, and GPUs we will all buy.


----------



## Vulpesveritas (Mar 23, 2012)

BeepBeep2 said:


> You went from heavy intel fanboy to heavier sarcastic AMD fanboy  I apologize for the vulgarity and personal attacks in my post towards you in that other thread however your blatant fanboyism/partiality was a bit more out of line than the other guys. IMO, your first comments there were far from mature, and you should be the last person to talk about maturity... Lets be real, speak the truth in whole...don't speak 20% of what you want to speak and skew it to make it seem the truth. I personally am disappointed in the CPU performance of this APU. However it is totally absent of L3 cache, so maybe Piledriver will do a bit better in CPU-only form. Still, I doubt AMD will improve much. 5-10% performance per clock over Bulldozer coupled with a little OC headroom in the same power envelope would surprise me. Still, they are* way* behind intel's offerings in performance per watt.
> 
> As far as speaking about kids and bullies in elementary, middle, and high school...it is still that way. I am a junior in high school...I definitely grew up in the "stand up for yourself despite it getting you suspended" environment. However most of the kids grow up by the time they hit high school. In fact, law enforcement gets involved, because schools are monitored with cameras and any fights or backlash occur out in the open. Kids are threatened with expulsion even if they talk about doing such things. Also, as far as the behavior outside that, it isn't just the coddling, but kids now have instant access to an infinite amount of free information. Unfortunately some of the children don't learn to do things themselves (what you would call a spoiled child, who is all about "me me me" and doesn't even know how to clean up after himself or meet a deadline), while other children take advantage of the resources today's age has given them and succeed and exceed higher than any generation before them. By the way, it is the adults that grew up 15-20, even 30 years ago that are influencing and teaching the children of today.  It's not a personal dig this time, just my opinion on the topic. I will be 17 soon, if you'd like to age discriminate



Well, let me point out that we have a chip just 400 MHz slower than this one running at 65 watts, vs. an i3 with a smaller IGP. AMD is catching up, but they are a generation behind. However, unlike said i3, you have a decent IGP. lol.  
I mean, if that is accurate, this actually may be closer to a 95/100w part than the old Llano: 15% higher power efficiency while having a 20% faster CPU and a 50% faster GPU.  
That's a pretty big leap for one generation, compared with an IB i3 vs. a SB i3, where the Ivy Bridge processor is somewhere around 20% more power efficient with a 10% faster CPU and a somewhat faster GPU which still can't beat an AMD A4's integrated graphics.


----------



## BeepBeep2 (Mar 24, 2012)

trickson said:


> Oh the youth of today! If only they really had respect for there elders. I have kids that are older than most of you and grand kids now. What has this world come to?
> AMD is doing well in both there CPU and APU lines, They are not that bad and can keep up really well. My honest opinion is AMD is going to continue to provide us all with CPU's, APU's and GPU's we will all buy.


Honestly, this is an open forum... and you are not my father; you are simply another person living on this earth. If you take a look at some of your own posts recently, they've been pretty immature, and that was way before I said anything to you. By the tone of your comments, you are the kind of person that likes to kick others while they are down. You also blame the children when the adults created this environment. Maybe humanity should just stop reproducing, so these new generations of worsening children will never exist. Am I wrong? I am open-minded, and respect those people who deserve respect. I'm sure you know how to be a gentleman, and I try my best to be one too. 

In my opinion, which I am allowed to voice just as much as you, AMD is doing great with their APU line. However, Bulldozer has sucked. Their APU line is great for low power applications and entry/mainstream, but they are simply unable to compete in upper midrange and high end. Kepler poses some problems for GCN architecture too.


----------



## trickson (Mar 24, 2012)

BeepBeep2 said:


> Honestly, this is an open forum...and you are not my father, you are simply another person living on this earth. If you take a look at some of your own posts recently, they've been pretty immature, and way before I said anything to you. By the tone of your comments, you are the kind of person that likes to kick others while they are down. You also blame the children when the adults created this environment. Maybe humanity should just stop reproducing, so their new generations of worsening children will never exist. Am I wrong? I am open minded, and respect those people who deserve respect. I'm sure you know how to be a gentleman, and I try my best to be too.
> 
> In my opinion, which I am allowed to voice just as much as you, AMD is doing great with their APU line. However, Bulldozer has sucked. Their APU line is great for low power applications and entry/mainstream, but they are simply unable to compete in upper midrange and high end. Kepler poses some problems for GCN architecture too.



Yeah you win! 

Honestly, calling someone immature (and really the whole first part of your post) just proves my point.


----------



## Vulpesveritas (Mar 24, 2012)

BeepBeep2 said:


> In my opinion, which I am allowed to voice just as much as you, AMD is doing great with their APU line. However, Bulldozer has sucked. Their APU line is great for low power applications and entry/mainstream, but they are simply unable to compete in upper midrange and high end. Kepler poses some problems for GCN architecture too.



Well, Bulldozer sucks mainly due to its power inefficiency. If it weren't for that, then it would be a success, as it was meant to "hold the line" on IPC while increasing power efficiency and thread count. Quite obviously the first iteration failed at two out of the three goals. Piledriver is looking to be what Bulldozer was meant to be, while increasing clocks as well, given there's an A10 2m/4c/4t 3.6 GHz 65w part with integrated graphics in the lineup.  
So the idea behind Bulldozer was good... its first implementation wasn't. If this is real, then IPC still doesn't seem to quite match Llano, but should be about on par with Deneb.  
On GCN's end, there is still speculation that the 79xx series under-performs, based upon how the 78xx series performs. And it is great at GPGPU, which is what AMD needs for its future heterogeneous computing goals, where they can dump floating point operations onto the GPU part of the APU, as well as for heavily multithreaded environments where one can run parallel work on the graphics part as well. Not to mention GCN being able to natively run C++; as both the PS3 and the Durango (aka Xbox 720) may very well have GCN graphics, we may see future games taking advantage of GPU compute. While it doesn't amount to much in the short term except for Bitcoin mining, web browsing, media players, and a faster UI in the OS, GCN is also still better at tessellation than past AMD graphics cards, and more energy efficient than anything but Kepler. And it isn't really that much less power efficient than Kepler: 5% less power efficient than the top-end part and 5-15% slower depending on the game, although the 7970 is nearly twice as fast in GPGPU as Kepler. So GCN is a better all-round architecture whereas Kepler is optimized for gaming.

And AMD has also managed to remain more ethical in their practices than their competition, which is a win in my books.


----------



## trickson (Mar 24, 2012)

Vulpesveritas said:


> And AMD has also managed to remain more ethical in their practices than their competition, which is a win in my books.



Or they're just better at keeping things under cover. Ever thought of that? 

After all, it took a long time for Apple's misery to come out. Given enough time I am almost positive that there will be light shed on some skeletons soon. Everyone has them.


----------



## BeepBeep2 (Mar 24, 2012)

trickson said:


> Yeah you win!
> 
> Honestly calling some one immature and really the first part of your post just proves my point.


Okay, you've said "Yeah you win!" enough times.
This has absolutely nothing to do with me winning on a personal level, or trying to be better than everyone on earth like a lot of kids do when they hit puberty; this is about a desire for people to, at the very least, be level-headed. If you look back at what you said in the thread I called you out on, there is nothing out of line in my assumption about what kind of person you are. I don't know you in person, though, of course.

Please read the PM I sent you. I'm not trying to be rude or anything, but when adults know they are speaking to children and speak to them like that, or when children enter a discussion to see an adult like you acting and/or speaking the way you did (first impressions say a lot!), there is no question why the children act the way they do.

On topic:
Can't wait to see how resonant clock mesh and core improvement help Trinity vs current desktop Bulldozer parts.


----------



## Vulpesveritas (Mar 24, 2012)

BeepBeep2 said:


> On topic:
> Can't wait to see how resonant clock mesh and core improvement help Trinity vs current desktop Bulldozer parts.



Yeah.  I wonder how it will OC with that resonant mesh.  

hmm...
I mean, it's 15% more power efficient than Llano too, and with that resonant mesh reclaiming ~10% of the energy that would otherwise be wasted as heat, wouldn't putting more voltage on the thing increase your OCability along with the extra heat you're putting out...
...
And the Llano STARS core is more power efficient than Bulldozer...
so it will be interesting to see how Piledriver ends up. That is, of course, if this has any credibility to it.


----------



## Dent1 (Mar 26, 2012)

Andy77 said:


> @Dent1, the evidence is in BD's benchmarks, which PD is based on, unless they redid a 5 year old uarch over night /s. Lower the clocks and see it barely catching up to it's older brother. And try to use your common sense more, instead of GoOgle.



Nobody knows how long AMD has been working on Piledriver. For all we know they could have been working on Piledriver's refinements for years. We as consumers only know what AMD chooses to tell us. So your point is void.

Until Piledriver comes out, stop guessing.


----------



## xenocide (Mar 27, 2012)

Dent1 said:


> Nobody knows how long AMD has been working on Piledriver? For all we know they could of been working on Piledrivers refinements for years. We as consumers only know what AMD choose to tell us.  So your point is void.
> 
> Until Piledriver comes out. Stop guessing.



AMD stated they expect a 10-15% performance gain with Piledriver.  I just don't see that happening.  He also wasn't saying it took them 5 years of work to get PD out (if that were the case they would have just released PD in the first place); he meant BD.  BD was originally announced back in, I think, 2007, and only appeared last fall.  It took them several years of work to get BD out, so I doubt that within a year they could release PD (just a revision of BD) that would show massive gains.  I'd say they have probably been working on PD for 2 years at this point, since they had the final design for BD ready at about that time (and were just working on bugs after that point).


----------



## Vulpesveritas (Mar 27, 2012)

xenocide said:


> AMD stated they expect a 10-15% performance gain with Piledriver.  I just don't see that happening.  He also wasn't saying it took them 5 years of work to get PD out, if that were the case they would have just released PD in the first place, he meant BD.  BD was originally announced back in I think 2007, and only appeared last fall.  It took them several years of work to get BD out, so I doubt within a year they could release PD (just a revision of BD) that would show massive gains.  I'd say they have probably been working on PD for 2 years at this point since they had the final design for BD ready at about that time (just were working on bugs after that point).



"you just don't see that happening?" 

So... you didn't see Athalon 64 happening... Or phenom II, or Core 2, or Sandy bridge...

20% gains do happen from one year to the next.  Not as often as smaller gains, however Bulldozer was held back by a large number of small things rather than one huge thing.  
1. Modules were meant to increase performance / watt.  Clearly that didn't happen with BD, however there is no reason they can't pull it off.
2. Low speed cache.  One of the main complaints of BD, and one of the ones that I feel could easily have been fixed in PD.
3. Hand-made architecture tweaks.  To improve efficiency / performance more.  Probably happens with these architecture changes.
4.  Maybe they increased the front end's size so it can do better than Brazos / core?... 
5. Resonant Clock Mesh.  Converts ~10% of energy that would be wasted as heat into clock speed.

Quite likely PD is not -just- a bugfix / revision.  
Based on what I've read.

And AMD has stated they're shooting for a 15% increase + each year from now to 2015 in architecture performance.  Which isn't a lot when you factor in that Moore's law says it should move faster than that. 
(obviously it doesn't but eh it's hardly improbable for PD to do quite a bit better than BD.)
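For scale, 15% per year compounds; a tiny sketch assuming the target is hit exactly each year (a hypothetical, since the roadmap promises no such precision):

```python
# Compound effect of a hypothetical 15% yearly architecture gain over 3 years.
gain = 1.0
for _ in range(3):   # e.g. 2012 -> 2015 in three yearly steps
    gain *= 1.15
print(f"Cumulative gain: {gain:.2f}x")  # prints 1.52x
```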

And wasn't Bulldozer's architecture designed to provide more performance in the long run?  Instead of sticking with a single architecture, making small tweaks here and there, and relying on die shrinks hoping it turns out better in the end?


----------



## xenocide (Mar 27, 2012)

For starters, nobody saw Athlon 64 coming.  Everyone figured it would be an improvement, but nothing like what they delivered.  Similarly, Core 2 was a huge advancement that few people saw coming.  I wouldn't put Phenom II up there, since Phenom I was a flop and PII pretty much just corrected it and implemented it properly.  As for SB, I was pretty certain it would be impressive; the thing that makes SB so desirable is the price though. Not many saw them pricing a CPU on par with their last-gen $1000 CPU at $200-300.

I'm sure AMD hopes for a 15% improvement, but they also said Bulldozer would be much better than Phenom II. For a lot of things it is better, but is it better than if they had reworked the IMC and shrunk down Phenom II's design?  Phenom II vs. BD is a lot like P3 vs. P4.

The thing is, what AMD says they are trying to do and what they actually do are generally two very different things.  I mean, BD was intended to use less power, but that clearly didn't happen.  As for resonant clock mesh, I see nowhere that it "converts" 10% of heat into clock speed.  I've seen that it can reduce power consumption by up to 30% (according to Cyclos, the company that designed it; actual PD CPUs are said to have 24% lower consumption) and increase overall performance by 5-10% because you have access to higher clocks and more power.  Something to consider is that resonant clock mesh is also going to require space on die, because it adds capacitors and inductors to the CPU.

Bulldozer was designed to be scalable, but the issue isn't scalability.  They can keep adding modules, and anything that can use those will see a performance gain, but their performance per thread is the issue.


----------



## Vulpesveritas (Mar 27, 2012)

xenocide said:


> For starters, nobody saw Athlon 64 coming.  Everyone figured it would be an improvement, but nothing like what they delivered.  Similarily, Core2 was a huge advancement, that few people saw coming.  I wouldn't put Phenom II up there since Phenom I was a flop, and PII pretty much just corrected it and properly implemented it.  As for SB, I was pretty certain it would be impressive, the thing that makes SB so desirable is the price though, not many saw them pricing a CPU that was on par for their last gen $1000 CPU at $200-300.
> 
> I'm sure AMD hopes for a 15% improvement, but they also said Bulldozer would be much better than Phenom II, and for a lot of things it is better, but is it better than if they had reworked the IMC and shrunk down Phenom II's design?  Phenom II vs. BD is a lot like P3 vs. P4.
> 
> ...




Simple thing is, we won't know till we see it.  
On topic with this thread: pertaining to those very same cores, looking at the power consumption, we see 99 watts at its full clock speed of 3.8 GHz, with the IGP using 30-50ish of those.  
So if this is real, then I believe AMD has upped their game in overclockability.  Not to mention the 65w part at 3.6 GHz.  

As far as why they didn't just shrink PII, I think this guy has it right:
"Bulldozer is performing badly mostly because of:
1) Combination of small L1 caches and slow L2 caches. This problem stays with Piledriver.
2) L1 instruction cache aliasing problems and write-through L1 caches causing excessive L2 traffic. This problem stays with Piledriver.
3) They made a couple of small mistakes somewhere and it cannot reach the clock speeds it was supposed to reach / the speeds most of its pipeline would allow. Piledriver will fix this. 
4) To get full floating point performance, you have to use AMD's own FMA4 instructions. No legacy software uses those, and not all new software is going to use them because Intel is not going to implement those same instructions. Piledriver is going to support Intel Haswell-compatible FMA3, so new code optimized for Intel will give full FPU performance on Piledriver, with no need for AMD-specific optimizations.
[Posted by: hkultala  | Date: 02/22/12 09:30:26 PM]

K10 had reached its age. Already Nehalem beat it badly, and there was no room for improvement left in K10; there was too much legacy burden from K7, like lack of memory disambiguation, too tightly coupled ALU and AGU units, Tomasulo-style OOE instead of PRF-based OOE, etc. 
And you cannot change these things in an existing architecture; they had already changed everything that could be changed/improved between K7 and K10.

So quite many years ago AMD knew it needed a totally new architecture after these K7 derivatives, and they developed Bulldozer. It ended up being worse than expected, but most of the problems are with the implementation, not deep in the architecture.

Now there is a lot of room for improvement by fixing those things that appeared to be bottlenecks in the design.
[Posted by: hkultala  | Date: 02/23/12 05:42:39 PM]"
http://www.xbitlabs.com/news/cpu/di...stone_Thanks_to_Resonant_Clock_Mesh_Tech.html


Although I'm hoping AMD will have worked on the cache somewhat.  

Simple thing is, we won't know till the end product is released, and we all hope, for the sake of prices, that AMD comes up with a good processor that can compete.  And it's not even implausible for them to.


----------



## Dent1 (Mar 27, 2012)

xenocide said:


> AMD stated they expect a 10-15% performance gain with Piledriver.  I just don't see that happening.  He also wasn't saying it took them 5 years of work to get PD out, if that were the case they would have just released PD in the first place, he meant BD.  BD was originally announced back in I think 2007, and only appeared last fall.  It took them several years of work to get BD out, so I doubt within a year they could release PD (just a revision of BD) that would show massive gains.  I'd say they have probably been working on PD for 2 years at this point since they had the final design for BD ready at about that time (just were working on bugs after that point).




It really doesn't matter whether they've been working on Piledriver for 2 years or 5 years or 20 years. My point is that until Piledriver is released we don't know how it'll perform. We can theorise based on Bulldozer's specification and AMD's history thus far, but it's still an educated guess.  Nobody here, including Andy77, has any business saying that an unreleased processor will have X or Y performance with 100% certainty.


----------



## devguy (Apr 3, 2012)

Here's some info about the upcoming A8 Trinity.  I don't know what clock speeds this was at, but as far as I'm concerned, in the mobile space only performance / watt is relevant (IPC can take a back seat), and this is in the same 35W space as Llano.


----------

