
AMD Details Plans to Deliver 25x APU Energy Efficiency Gains by 2020

Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
How is Intel gonna die if me and a few friends will be using Intel rather than AMD?

I'm changing because AMD takes forever to put newer technology in their CPUs, and also because they might never cater to enthusiasts.

My friends, however, got tired of waiting for AMD to roll out a new architecture for the FX series (Steamroller :D).

Quick, I've got one, get the net.
:D
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Nice trolling mroofie, but they are going to piss these economy figures whilst dancing on Intel's grave (put there by Qualcomm and others) ;p
AMD might be making plans for the future, but that doesn't mean that the competition stands still.
This also seems to reflect the classic mindset that dug a large hole for AMD in the first place: a single-minded focus on what would become K8 and Bulldozer whilst almost totally ignoring the competition and expecting Intel to persevere with Netburst and Core respectively. Intel might be wedded to x86, but that doesn't mean it's their sole focus - they do have an ARM architectural licence, and the IP deal (rare for Intel) with Rockchip tends to point to some diversification in processor strategy.

@GhostRyder
You keep dreaming those dreams, son. The naïveté is refreshing. Last time I checked, Intel was built across quite a few product lines - and even taking CPUs in isolation, they basically own the x86 pro markets. HSA is all nice and dandy, but at some stage it has to progress to actual implementation rather than PPS decks and "The Future Is..."™. For that to happen, AMD need to start delivering. They won't have IBM on board, and Dell, Cisco, and HP are all firmly entrenched in the Intel camp.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
The problem is that regardless of what AMD does, Intel can always be one step ahead because of how much money Intel has available for things like R&D.

Seriously, how do you think AMD plans to contend with CPUs like the C2750? It's like an i5 without an iGPU, with twice as many cores and the PCH moved onto the CPU. It's everything you could ever want from a low-power CPU, with the exception of half-decent graphics, but Intel already knows how to play that game with Iris Pro, and if the consumer market ever demanded it, I'm sure Intel would deliver. It's also important to remember that Intel's iGPUs aren't as crappy as they used to be (most people don't game - keep that in mind too).

AMD should take all this PR funding and put it into R&D because pandering to the masses isn't going to make their hardware any better than it already is. I don't see Intel making claims like this nearly as often as AMD does when it comes to PR.

With all of this said, I still love my AMD graphics cards but I'm glad I decided to get an i7.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
@HumanSmoke and you keep blowing that ignorant smoke. Funny read, as per usual. Gee, I wonder why IBM, Dell, and some of the other camps are stuck with Intel - could it be that whole business that just got another settlement, recently or in the past, pick your poison. Did I ever mention HSA once?

But then again I expect nothing less from you hence why I rarely care anymore what you have to say. Keep posting I have nothing to say to you.

The problem is that regardless of what AMD does, Intel can always be one step ahead because of how much money Intel has available for things like R&D.

Seriously, how do you think AMD plans to contend with CPUs like the C2750? It's like an i5 without an iGPU, with twice as many cores and the PCH moved onto the CPU. It's everything you could ever want from a low-power CPU, with the exception of half-decent graphics, but Intel already knows how to play that game with Iris Pro, and if the consumer market ever demanded it, I'm sure Intel would deliver. It's also important to remember that Intel's iGPUs aren't as crappy as they used to be (most people don't game - keep that in mind too).

AMD should take all this PR funding and put it into R&D because pandering to the masses isn't going to make their hardware any better than it already is. I don't see Intel making claims like this nearly as often as AMD does when it comes to PR.

With all of this said, I still love my AMD graphics cards but I'm glad I decided to get an i7.

Maybe, but no one is saying an i7 isn't as good as anything AMD has on the table. They have the best performance right now, and that's not going to change for a while.
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Maybe, but no one is saying an i7 isn't as good as anything AMD has on the table. They have the best performance right now, and that's not going to change for a while.
...or power efficiency, for that matter. And if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well, which leaves them with nothing but cost. I don't know about you, but that troubles me.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
...or power efficiency, for that matter. And if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well, which leaves them with nothing but cost. I don't know about you, but that troubles me.
True, but the chips that do have Iris Pro are expensive at the moment. The mobile market is where these matter and where they shine.

Right now Iris Pro's main advantage is the RAM built onto the chip. It depends on how far they take it, but I could dig it either way if it's offered at a decent price.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
@HumanSmoke and you keep blowing that ignorant smoke. Gee, I wonder why IBM, Dell, and some of the other camps are stuck with Intel
Well, talking of ignorance, IBM haven't been with Intel since the 5162... twenty-eight years ago.
Did I ever mention HSA once?
Nope, but then I'd be surprised if you did since you think the computing business revolves around APU gaming laptops.
...or power efficiency, for that matter. And if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well, which leaves them with nothing but cost. I don't know about you, but that troubles me.
And so it should, and so should everyone else for that matter. AMD's business strategy seems to be based on razor-thin margins (console and consumer APUs, ARM cores for servers), which means they need large sales volumes. That's OK when you have OEM confidence and a locked-down market; with the exception of the gaming consoles - which don't net big returns - that isn't the case.
True, but the chips that do have Iris Pro are expensive at the moment. The mobile market is where these matter and where they shine.
Right now Iris Pro's main advantage is the RAM built onto the chip. It depends on how far they take it, but I could dig it either way if it's offered at a decent price.
How long do you think it would take Intel to jam an HD 5200 into any chip if they felt that their market dominance was threatened by not having it? This is a company with a huge fabrication overcapacity.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Well, talking of ignorance, IBM haven't been with Intel since the 5162... twenty-eight years ago.

"Are stuck", they use their own processor and mostly intel in the server and desktop world which now or are about to belong to Lenovo. Want a picture of an IBM machine with an intel processor inside?

Nope, but then I'd be surprised if you did since you think the computing business revolves around APU gaming laptops.

The point was it being a media center, since a 600-dollar APU laptop is highly unlikely to be a straight compute device. I'm not surprised you don't understand that there are casual users out there who use their laptops as media hubs and do not intend to spend a fortune on a laptop.

And so it should, and so should everyone else for that matter. AMD's business strategy seems to be based on razor-thin margins (console and consumer APUs, ARM cores for servers), which means they need large sales volumes. That's OK when you have OEM confidence and a locked-down market; with the exception of the gaming consoles - which don't net big returns - that isn't the case.

Yeah, because over ten million devices sold is minor...

How long do you think it would take Intel to jam an HD 5200 into any chip if they felt that their market dominance was threatened by not having it? This is a company with a huge fabrication overcapacity.

Or they could just stick with what they usually do...

Now then, I'm done with you, and I'll leave on a nice Mark Twain quote which I should heed. I'm sure your response will be equally hilarious, but I would rather not drag this thread any further off-subject than this.
 

Over_Lord

News Editor
Joined
Oct 13, 2010
Messages
764 (0.15/day)
Location
Hyderabad
System Name BAH! - - tHE ECo FReAK
Processor Athlon II X4 620 @ 1.15V
Motherboard ASUS 785G EVO
Cooling Stock
Memory Corsair Titanium 4GB DDR3 1600MHz C9
Video Card(s) Sapphire HD5850 @ 1.049v
Storage Seagate 7200.12 500GB
Display(s) BenQ G2220HD
Case Cooler Master Elite 334
Audio Device(s) Onboard
Power Supply Corsair VX550W
Software Windows 7 Ultimate x64
*Reads AMD APU*
*Stops reading*
*Clicks on close tab*

And yet you are still here :p

Looks like somebody 'roofied' you.

See what I did there?
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
"Are stuck", they use their own processor and mostly intel in the server and desktop world which now or are about to belong to Lenovo. Want a picture of an IBM machine with an intel processor inside?
Oh, you suddenly want to bleat on about servers when it's IBM vs Intel, but when it's AMD vs Intel, server share isn't a subject for discussion? You seemed to want to focus entirely upon consumer products. I adapted.
Did I ever mention HSA once?
You do realise that AMD's whole enterprise strategy is predicated upon HSA?
You flip-flop faster than a politician caught red-handed with a rent boy.
Now then I'm done with you...
Didn't you say that last time out? Oh, yes! You did
Keep posting I have nothing to say to you.
:rolleyes: o_O
Yeah, because over ten million devices sold is minor...
So what? Sales mean f___ all if they don't translate into revenue.
 
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
AMD might be making plans for the future, but that doesn't mean that the competition stands still.
This also seems to reflect the classic mindset that dug a large hole for AMD in the first place: a single-minded focus on what would become K8 and Bulldozer whilst almost totally ignoring the competition and expecting Intel to persevere with Netburst and Core respectively. Intel might be wedded to x86, but that doesn't mean it's their sole focus - they do have an ARM architectural licence, and the IP deal (rare for Intel) with Rockchip tends to point to some diversification in processor strategy.

@GhostRyder
You keep dreaming those dreams, son. The naïveté is refreshing. Last time I checked, Intel was built across quite a few product lines - and even taking CPUs in isolation, they basically own the x86 pro markets. HSA is all nice and dandy, but at some stage it has to progress to actual implementation rather than PPS decks and "The Future Is..."™. For that to happen, AMD need to start delivering. They won't have IBM on board, and Dell, Cisco, and HP are all firmly entrenched in the Intel camp.
Ahem you were saying ~
The graphics units will also add support for Shared Virtual Memory. As we understand, and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Some features of Skylake graphics architecture

The fact is Intel has benefited greatly from the innovations AMD has brought to the x86 & general computing realm, whilst the single biggest gift they've received from Intel in the last decade has been the bribes to OEM's circa 2006, in other words a stab in the back! Also Nvidia is embracing HSA with CUDA 6 (software only atm), so what I see from your post is ignorance for one, & secondly you (perhaps) think that Intel is pro-consumer when in fact they're virtually the exact opposite & their actions, like unfairly blocking overclocking on non-Z boards just recently, over the last many years certainly prove this point!
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Ahem you were saying ~
Directly from your quote:
and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Which heterogeneous applications would they be ? Would these be future applications or applications actually available?
The fact is Intel has benefited greatly from the innovations AMD has brought to the x86 & general computing realm
And? What has that got to do with AMD's strategic planning?
whilst the single biggest gift they've received from Intel in the last decade has been the bribes to OEM's circa 2006, in other words a stab in the back!
Yep. That's Intel.
Also Nvidia is embracing HSA with CUDA 6 (software only atm)
Not strictly HSA, it's unified memory pooling and isn't Nvidia part of the OpenPOWER consortium rather than the HSA Foundation?
so what I see from your post is ignorance
If you think OpenPOWER, Intel's UMA, and HSA are all interchangeable on a software level I think you're going to have to show your working before you start bandying around terms like ignorance.
secondly you (perhaps) think that Intel is pro-consumer
Maybe you should stop ascribing conclusions based on comments that haven't been made.
I'm a realist, and I see what the vendors do, how they achieve it, and the outcomes. Noting the facts doesn't imply anything other than noting the facts.
when in fact they're virtually the exact opposite & their actions, like unfairly blocking overclocking on non-Z boards just recently, over the last many years certainly prove this point!
Looks like you're just looking for an excuse to vent because this has absolutely no correlation to anything I've commented on.

Looking for an argument that Intel isn't an abuser of its position? You won't find one here. Intel's modus operandi is fairly well known. Intel's failings as a moral company don't excuse AMD's years of dithering, changing of focus depending upon what others are doing, saddling themselves with a massive debt burden by paying double what ATI was worth, selling off mobile IP for peanuts, dismissing the mobile market in toto, and a host of missteps.

You want to talk about ignorance? Blame Intel's bribery of OEMs (particularly Dell) to keep AMD out of the market? Know why the settlement wasn't bigger? AMD - thanks to Jerry "Real men have fabs" Sanders - was too proud to second-source foundry capacity. Bribes from 2006? Sure there were... AMD also couldn't supply the vendors it already had. Think that was a blip? Analysts were warning of AMD processor shortages years before this ever became acute. AMD complaining that Dell didn't want their processors was offset to a degree by OEMs complaining that AMD chips weren't available in quantity (so, 2002, 2006, and this from 2004 - see the trend), so AMD waited until vendors were publicly complaining* (and Jerry had been put out to pasture) before AMD struck a deal with Chartered Semi... and even then used less than half the outsourcing allocation allowed under the licence agreement with Intel.

Sometimes the truth isn't as cut-and-dried as good versus evil.

* Poor AMD planning causes CPU shortages: ...But European motherboard firms, talking to the INQ on condition of anonymity, were rather more blunt about the problem. One described the shortages as due to "bad planning".
 
Last edited:
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
Directly from your quote:
and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Which heterogeneous applications would they be ? Would these be future applications or applications actually available?
Well, Intel is going to implement HSA now; whether they'll call it xSA or whatever remains to be seen. OpenCL 2.0, for instance, brings SVM (shared virtual memory) support, & unless Intel somehow plans to delay implementation of an industry-wide open standard in their iGPUs, I don't see how that piece of info is speculation.

Not strictly HSA, it's unified memory pooling and isn't Nvidia part of the OpenPOWER consortium rather than the HSA Foundation?

If you think OpenPOWER, Intel's UMA, and HSA are all interchangeable on a software level I think you're going to have to show your working before you start bandying around terms like ignorance.

OpenPOWER is completely separate from HSA, hUMA & OpenCL because it's just something IBM's done to save their POWER-based server line. As for Nvidia, now that they're going to add OpenCL 2.x support to their GPUs, it means they'll be jumping on the HSA bandwagon themselves; again, it doesn't have to be called HSA to be implemented as such, & I won't be surprised if MS brings OS-level support for HSA in Win9.

Looks like you're just looking for an excuse to vent because this has absolutely no correlation to anything I've commented on.

Looking for an argument that Intel isn't an abuser of its position? You won't find one here. Intel's modus operandi is fairly well known. Intel's failings as a moral company don't excuse AMD's years of dithering, changing of focus depending upon what others are doing, saddling themselves with a massive debt burden by paying double what ATI was worth, selling off mobile IP for peanuts, dismissing the mobile market in toto, and a host of missteps.

You want to talk about ignorance? Blame Intel's bribery of OEMs (particularly Dell) to keep AMD out of the market? Know why the settlement wasn't bigger? AMD - thanks to Jerry "Real men have fabs" Sanders - was too proud to second-source foundry capacity. Bribes from 2006? Sure there were... AMD also couldn't supply the vendors it already had. Think that was a blip? Analysts were warning of AMD processor shortages years before this ever became acute. AMD complaining that Dell didn't want their processors was offset to a degree by OEMs complaining that AMD chips weren't available in quantity (so, 2002, 2006, and this from 2004 - see the trend), so AMD waited until vendors were publicly complaining (and Jerry had been put out to pasture) before AMD struck a deal with Chartered Semi... and even then used less than half the outsourcing allocation allowed under the licence agreement with Intel.

Sometimes the truth isn't as cut-and-dried as good versus evil.
Not really, I've heard this "HSA being vaporware" stuff more than once & it just irks me more every time I hear it. Lastly, I'll add that it isn't AMD's fault that most software/game developers are fat-ass lazy turds that need spoon-feeding; I mean, how long has it been since we've had multicore processors, & the number of applications/games properly utilizing them is still in the low hundreds at best. It took the next-gen consoles for game developers to add support for four or more cores in their game engines; it'll take something bigger for them to adopt HSA, but I have very little doubt that those who don't or won't will become extinct - perhaps not in the next 5 yrs, but certainly in a decade or so. What we as consumers can do is support (software & game) developers that promote innovation & shun those who're dinosaurs in the making.
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Not really, I've heard this "HSA being vaporware" stuff more than once & it just irks me more every time I hear it. Lastly, I'll add that it isn't AMD's fault that most software/game developers are fat-ass lazy turds that need spoon-feeding; I mean, how long has it been since we've had multicore processors, & the number of applications/games properly utilizing them is still in the low hundreds at best. It took the next-gen consoles for game developers to add support for four or more cores in their game engines; it'll take something bigger for them to adopt HSA, but I have very little doubt that those who don't or won't will become extinct - perhaps not in the next 5 yrs, but certainly in a decade or so. What we as consumers can do is support (software & game) developers that promote innovation & shun those who're dinosaurs in the making.

Are you a software developer? Do you write concurrent code that is thread-safe and works all the time? Yeah, I didn't think so. Keep your assumptions about how people like me do my job to yourself. Don't presume to talk about something where you have absolutely no idea what kind of work needs to be done to accomplish what you suggest. There are a lot of considerations that need to be made when writing concurrent code, even more so when things like data order, or the order that data is processed, are important; when you introduce a basic (and common) factor like that, the benefit of threading and multi-core systems goes out the window, because you still have a bottleneck - the only difference is that you moved it from a single thread to a lock where only one thread can run at once, even if you spin up 10 of them.
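To make that last point concrete, here's a minimal Python sketch (hypothetical numbers, just for illustration): ten threads are spun up, but one shared lock around the "real work" serializes them again, so the wall-clock time is barely better than single-threaded.

```python
import threading
import time

counter = 0
lock = threading.Lock()  # single shared lock protecting the counter

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:              # every thread queues up here...
            counter += 1        # ...so this section runs one thread at a time
            time.sleep(0.0001)  # pretend the critical section does real work

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(10)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()

# 10 threads x 100 iterations, but the locked section was still serial:
print(counter, "increments in", round(time.perf_counter() - start, 2), "s")
```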

With all of that said, it pisses me off when people like you think that writing concurrency code that scales is easy when it's not.

For it to scale, most of it needs to be parallel, not just a tiny bit of it and I can't even begin to describe to you how complex that can get.

(Image: Amdahl's law - parallel speedup vs. number of processors for various parallel fractions)

So if only 50% of your workload is parallel, the most you'll ever get is a 2x speedup, no matter how many cores you add - HALF of your work needs to be parallel just to hit that 2.0 ceiling... and you want code to run on how many cores again?

Wikipedia said:
The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (95%) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence the speedup is limited to at most 20×.
http://en.wikipedia.org/wiki/Amdahl's_law
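For anyone who wants to plug their own numbers into that, here's a tiny Python helper for Amdahl's law (just the formula quoted above, nothing fancier):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Speedup = 1 / ((1 - P) + P / N), where P is the parallel fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# The Wikipedia example: 95% parallel caps out near 20x no matter the core count.
print(amdahl_speedup(0.95, 1_000_000))   # ~19.99

# 50% parallel never exceeds 2x, even on (effectively) infinite cores.
print(amdahl_speedup(0.50, 4))           # 1.6
print(amdahl_speedup(0.50, 1_000_000))   # ~2.0
```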
 
Last edited:
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
Are you a software developer? Do you write concurrent code that is thread-safe and works all the time? Yeah, I didn't think so. Keep your assumptions about how people like me do my job to yourself. Don't presume to talk about something where you have absolutely no idea what kind of work needs to be done to accomplish what you suggest. There are a lot of considerations that need to be made when writing concurrent code, even more so when things like data order, or the order that data is processed, are important; when you introduce a basic (and common) factor like that, the benefit of threading and multi-core systems goes out the window, because you still have a bottleneck - the only difference is that you moved it from a single thread to a lock where only one thread can run at once, even if you spin up 10 of them.

With all of that said, it pisses me off when people like you think that writing concurrency code that scales is easy when it's not.

For it to scale, most of it needs to be parallel, not just a tiny bit of it and I can't even begin to describe to you how complex that can get.
I don't think there's any need to get offended by something that's posted regularly on this & many other forums, though in a more subtle & (somewhat) polite way. What do you think X game being a cr@ppy port means, or Y application being slow as hell on my octa-core signifies?

Also, people (including but not limited to developers) do need a push to get things done more efficiently; for instance, how many browsers were using GPU acceleration before Google (Chrome) pushed them into obsolescence? How many browsers still don't use SSE 4.x or other advanced instruction sets? This isn't just you I'm talking about, but it also is not a blanket statement targeting every software/game developer out there, since I clearly put the emphasis on most!

View attachment 57351
So if only 50% of your workload is parallel, the most you'll ever get is a 2x speedup, no matter how many cores you add - HALF of your work needs to be parallel just to hit that 2.0 ceiling... and you want code to run on how many cores again?
Irrelevant, since I didn't mention the type of workload & thus you shouldn't try to sell Amdahl's law as an argument in such a case.
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I don't think there's any need to get offended by something that's posted regularly on this & many other forums, though in a more subtle & (somewhat) polite way. What do you think X game being a cr@ppy port means, or Y application being slow as hell on my octa-core signifies?

It signifies that maybe the developers had little time and/or little funding to make an already existing game run on a different platform, so it's realistic to assume that maybe the code can't easily be made to utilize more cores without investing a lot more time (which to businesses is money). They're only going to spend so much time on making it perform better than what it needs to.

Also, people (including but not limited to developers) do need a push to get things done more efficiently; for instance, how many browsers were using GPU acceleration before Google (Chrome) pushed them into obsolescence? How many browsers still don't use SSE 4.x or other advanced instruction sets? This isn't just you I'm talking about, but it also is not a blanket statement targeting every software/game developer out there, since I clearly put the emphasis on most!

GPU acceleration started to become important in browsers because of the complexity of rendering pages now versus pages several years ago. Web applications are much richer and have much more client-side scripting going on that alters the page in ways that make it more intensive than it used to be. But that's just rendering: it was becoming a bottleneck and, in Google's case with Chrome, they solved it. However, that doesn't mean that Chrome uses any more threads to accomplish the same task.

Irrelevant, since I didn't mention the type of workload & thus you shouldn't try to sell Amdahl's law as an argument in such a case.
You don't need to mention the type of workload for it to be relevant, because performance, and the ability to make any application performant on multi-core systems, depends on the kind of workload. You can't talk about any level of parallelism without discussing the workload that is to be run in parallel. The point is that making applications multi-threaded depends heavily on the application; not all applications can be made to run in parallel, and the impression you're giving me is that you don't believe that is the case - which is the point I was trying to prove with Amdahl's law.
 
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
It signifies that maybe the developers had little time and/or little funding to make an already existing game run on a different platform, so it's realistic to assume that maybe the code can't easily be made to utilize more cores without investing a lot more time (which to businesses is money). They're only going to spend so much time on making it perform better than what it needs to.
What would you say about the likes of EA (DICE) & their bug-filled launch of BF4, or is it that you're downplaying the fault of developers in such a mess? I would put WinRAR/WinZip in the same category, though they've done a lot, especially in the last couple of years, in implementing multi-core enhancements & hardware (OpenCL) acceleration respectively.
GPU acceleration started to become important in browsers because of the complexity of rendering pages now versus pages several years ago. Web applications are much richer and have much more client-side scripting going on that alters the page in ways that make it more intensive than it used to be. But that's just rendering: it was becoming a bottleneck and, in Google's case with Chrome, they solved it. However, that doesn't mean that Chrome uses any more threads to accomplish the same task.
The GPU acceleration was just an example of how developers need to be aware of the demands of this ever-changing computing landscape before some of them become irrelevant; btw, you still didn't answer why major browsers don't implement SSE 4.x or other advanced instruction sets. FYI, Firefox had GPU acceleration even before IE & Chrome, but they enabled it by default only after Chrome forced them to; the same goes for IE & their implementation of it since version 9.
You don't need to mention the type of workload for it to be relevant, because performance, and the ability to make any application performant on multi-core systems, depends on the kind of workload. You can't talk about any level of parallelism without discussing the workload that is to be run in parallel. The point is that making applications multi-threaded depends heavily on the application; not all applications can be made to run in parallel, and the impression you're giving me is that you don't believe that is the case - which is the point I was trying to prove with Amdahl's law.
Again, you're nitpicking what I said; my basic point was that most developers (not all of 'em) don't use the tools at their disposal as effectively as they could, or rather as they should.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
What would you say about the likes of EA (DICE) & their bug-filled launch of BF4, or is it that you're downplaying the fault of developers in such a mess? I would put WinRAR/WinZip in the same category, though they've done a lot, especially in the last couple of years, in implementing multi-core enhancements & hardware (OpenCL) acceleration respectively.

That depends? If it's the fault of them using poorly designed libraries that they wrote in the past, it could be a cost/time saving measure, definitely a bad one, but the company could have pushed them down that road. It could be the developer's fault, but that really depends on the timeline they had for doing the work they had to get done. Development doesn't always go the way you want it to. Sometimes that's the developer's fault and sometimes it isn't. It's hard to say without being inside the company and seeing what is going on but, one thing is certain, it's definitely EA (DICE)'s fault as a whole. :) I don't dispute that for a second.

I would put WinRAR/WinZip and other compression utilities in the category of workloads that are more easily parallelized than others because of the nature of what they're doing. Once again, this comes down to the workload argument. Archival applications and games are two very different kinds of workloads; it's a lot easier to make something like LZMA2 run in parallel than something like a game, which is far more stateful than an algorithm for compression or decompression. This isn't a matter of tools - you could have all the tools in the world, but that won't change the nature of some applications and how they need to be implemented. OpenCL doesn't solve all programming issues, and it doesn't mysteriously make things that couldn't be run in parallel suddenly able to be. These tools you talk about enable already-parallel applications to scale a lot better and across more compute cores than they did before; they don't solve the problem of having to make your workload thread-safe without being detrimental to performance in the first place.
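A rough illustration of why archival tools parallelize so naturally (a hypothetical Python sketch, not how WinRAR/WinZip or LZMA2 actually work internally): split the input into independent chunks and compress each one on its own core, because no chunk needs to know anything about another chunk's state.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB chunks - each one is an independent work item

def compress_chunk(data):
    # Pure, stateless work: output depends only on this chunk's bytes.
    return zlib.compress(data, 9)

def parallel_compress(payload):
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    # No shared state between chunks, so this maps cleanly onto many cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    blob = b"some highly repetitive example data " * 1_000_000
    compressed = parallel_compress(blob)
    print(len(blob), "->", sum(len(c) for c in compressed), "bytes")
```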

You complain about me nitpicking, but you're pointing out things that require that level of analysis and detail because problems like these aren't as easy to solve as you make them out to be.

One question: have you ever tried to write some OpenCL code and run it on a GPU? Try doing something useful in it if you haven't and you'll understand real quickly why only applications that are mostly parallel code in the first place use OpenCL. I get the impression that you haven't, so you shouldn't talk about something if you've never done it. I am, because I have... trust me, it's not intuitive, it's hard to use, and it's only helpful in very selective situations. I would never use it unless I was working with purely numerical data that was tens of gigabytes large or bigger, and only if the algorithm I'm implementing is almost completely stateless (or functional, if you will). Games (other than the rendering part, which GPUs are already doing) hardly fit any of those criteria. It's not that developers aren't using OpenCL, it's that they can't, or it doesn't make sense to, in most real-world applications in the consumer market.
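For anyone curious what even the "hello world" of OpenCL looks like from the host side, here's a rough sketch using the pyopencl bindings (assuming they and a working OpenCL driver are installed) - a plain element-wise add, i.e. exactly the kind of stateless, purely numerical workload described above:

```python
import numpy as np
import pyopencl as cl  # assumes the pyopencl package is available

KERNEL = """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);   // one work-item per array element
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # launch one work-item per element

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```

Note how much boilerplate there is just to add two arrays; anything with shared, mutable game state would be far worse, which is the point being made above.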

I do enjoy listening to you try to say what developers are and are not doing right when you're not in their shoes. Even as a developer, I wouldn't presume to think I knew more about another developer's project than they do without even seeing the code itself and having worked with it. So I find it both amusing and disturbing that you feel that you can voice your opinion in such an authoritative way, when not even I would make those kinds of claims given my own experience in the subject - and I'm a developer professionally, and I'm even working on a library that uses multiple threads.

Tell me more about why you're right.
 
Joined
Nov 4, 2005
Messages
11,983 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
TPU plans to hand out 25x more free graphics cards to readers by 2020
Comparative analysis shows this is plausible given the current trend of 0, I for one approve this message.

**Edit** There is a lot of butthurt in this thread, over PR, nothing more. I am glad they have this goal.

How about we debate the power usage, and how perhaps an embedded capacitor (or several) that can provide the peak power required when firing up more cores or execution units could provide us with a 200MHz CPU that clocks up to 4GHz instantly?

Decoupling capacitor built in anyone?
 
Last edited:

Fx

Joined
Oct 31, 2008
Messages
1,332 (0.23/day)
Location
Portland, OR
Processor Ryzen 2600x
Motherboard ASUS ROG Strix X470-F Gaming
Cooling Noctua
Memory G.SKILL Flare X Series 16GB DDR4 3466
Video Card(s) EVGA 980ti FTW
Storage (OS)Samsung 950 Pro (512GB), (Data) WD Reds
Display(s) 24" Dell UltraSharp U2412M
Case Fractal Design Define R5
Audio Device(s) Sennheiser GAME ONE
Power Supply EVGA SuperNOVA 650 P2
Mouse Mionix Castor
Keyboard Deck Hassium Pro
Software Windows 10 Pro x64
A few years ago they were bragging about how APUs would revolutionize the industry - and they did, by letting Intel get so far ahead, they don't even have to try anymore - basically grinding the industry to a halt as far as real innovations and improvements. Thanks AMD! You guys have a lot of nerve, spouting more nonsense about your crappy APUs, whose useless graphics core is too much for general use and not enough for gaming...

You sound like an uneducated fanboy. Useless APUs... really? You have some serious reading, comprehension, and contemplating to do.
 
Joined
Sep 15, 2011
Messages
6,722 (1.39/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Seriously, I don't get AMD. They are a huge company, so really, can't they afford to hire 2 or 3 top design engineers to design a new top CPU that can compete with the latest i7 from Intel?? I mean, geez, even reverse-engineer the stuff, or follow and try to improve on Intel's design if they're short on inspiration. The CPU design, architecture and even detailed charts and stuff are all over the internet.
Honestly, I don't get it...

*Reads AMD APU*
*Stops reading*
*Clicks on close tab*
That's a little childish, I guess. The latest top i7 CPUs from Intel are also APUs.
 
Last edited:
Joined
Jun 22, 2014
Messages
446 (0.12/day)
System Name Desktop / "Console"
Processor Ryzen 5950X / Ryzen 5800X
Motherboard Asus X570 Hero / Asus X570-i
Cooling EK AIO Elite 280 / Cryorig C1
Memory 32GB Gskill Trident DDR4-3600 CL16 / 16GB Crucial Ballistix DDR4-3600 CL16
Video Card(s) RTX 4090 FE / RTX 2080ti FE
Storage 1TB Samsung 980 Pro, 1TB Sabrent Rocket 4 Plus NVME / 1TB Sabrent Rocket 4 NVME, 1TB Intel 660P
Display(s) Alienware AW3423DW / LG 65CX Oled
Case Lian Li O11 Mini / Sliger CL530 Conswole
Audio Device(s) Sony AVR, SVS speakers & subs / Marantz AVR, SVS speakers & subs
Power Supply ROG Loki 1000 / Silverstone SX800
VR HMD Quest 3
That depends? If it's the fault of them using poorly designed libraries that they wrote in the past, it could be a cost/time saving measure, definitely a bad one, but the company could have pushed them down that road. It could be the developer's fault, but that really depends on the timeline they had for doing the work they had to get done. Development doesn't always go the way you want it to. Sometimes that's the developer's fault and sometimes it isn't. It's hard to say without being inside the company and seeing what is going on but, one thing is certain, it's definitely EA (DICE)'s fault as a whole. :) I don't dispute that for a second.

I would put WinRAR/WinZip and other compression utilities in the category of workloads that are more easily parallelized than others because of the nature of what they're doing. Once again, this comes down to the workload argument. Archival applications and games are two very different kinds of workloads; it's a lot easier to make something like LZMA2 run in parallel than something like a game, which is far more stateful than an algorithm for compression or decompression. This isn't a matter of tools - you could have all the tools in the world, but that won't change the nature of some applications and how they need to be implemented. OpenCL doesn't solve all programming issues, and it doesn't mysteriously make things that couldn't be run in parallel suddenly able to be. These tools you talk about enable already-parallel applications to scale a lot better and across more compute cores than they did before; they don't solve the problem of having to make your workload thread-safe without being detrimental to performance in the first place.

You complain about me nitpicking, but you're pointing out things that require that level of analysis and detail because problems like these aren't as easy to solve as you make them out to be.

One question: have you ever tried to write some OpenCL code and run it on a GPU? Try doing something useful in it if you haven't and you'll understand real quickly why only applications that are mostly parallel code in the first place use OpenCL. I get the impression that you haven't, so you shouldn't talk about something if you've never done it. I am, because I have... trust me, it's not intuitive, it's hard to use, and it's only helpful in very selective situations. I would never use it unless I was working with purely numerical data that was tens of gigabytes large or bigger, and only if the algorithm I'm implementing is almost completely stateless (or functional, if you will). Games (other than the rendering part, which GPUs are already doing) hardly fit any of those criteria. It's not that developers aren't using OpenCL, it's that they can't, or it doesn't make sense to, in most real-world applications in the consumer market.

I do enjoy listening to you try to say what developers are and are not doing right when you're not in their shoes. Even as a developer, I wouldn't presume to think I knew more about another developer's project than they do without even seeing the code itself and having worked with it. So I find it both amusing and disturbing that you feel that you can voice your opinion in such an authoritative way, when not even I would make those kinds of claims given my own experience in the subject - and I'm a developer professionally, and I'm even working on a library that uses multiple threads.

Tell me more about why you're right.

That was quite the amusing exchange lol! Gotta love the armchair developers. Anyway, if I may ask something somewhat on the topic of multi-threaded gaming: I happen to not be a developer and I do not write code, so it was enlightening reading your thoughts on the whole workload dependency for multi-threading. I too have wondered why gaming has taken a while to really embrace multi-core processors, and your explanation helps to understand some of those reasons (though I can say I never thought it was because devs are fat and lazy). My question, though, is that with Mantle and DX12 it seems we are looking for more and more ways to offload the CPU as much as possible, and to me that seems to make the need to try and heavily thread games (which, as you say, may not really be possible anyway) kind of irrelevant. It seems like the direction of making games is that the less the CPU is involved, the better. Certainly correct me if I'm wrong, but I don't understand why people like whats-his-name that was trying to argue programming with you want more and more CPU utilization when, to me at least, it seems pretty clear that the road to more performance in games relies less on the CPU and more on offloading to the GPU. (Maybe they just want to justify their expensive purchase of many-core CPUs to play Battlefield? I dunno..)

I apologize, I didn't mean to completely ignore the topic of the article in the first place... But as a user of both AMD and Intel (actually an AMD user from Socket A up until my first Intel build at the release of Core 2), I certainly hope that they achieve these goals, simply because any innovation from any team is always good for us. I can remember when AMD first mentioned Fusion and adding the GPU to the CPU die... then lo and behold, here comes Intel taking that idea and running with it, beating AMD to market initially with a crappy solution (though AMD still has the better iGPU today), and look where we are now with integrated graphics. I for one do appreciate the strides made with iGPUs for systems such as my Surface Pro. (I would really like to see an AMD APU version of one!) Or when AMD put the memory controller on die with the Athlon 64, then here comes Intel with Nehalem doing the same thing. It really is too bad that they took such a step backwards with Bulldozer when they seemed to have good momentum going and were hanging with Intel back in those days. I really hope that the day may come again when they are close and drive each other to really innovate. I've been an Intel user now since Core 2 and would love to feel like I have another option when building my desktops... possibly by the time I'll be looking to replace my upcoming Haswell-E system?
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
That was quite the amusing exchange lol! Gotta love the armchair developers. Anyway, if I may ask something somewhat on the topic of multi-threaded gaming: I happen to not be a developer and I do not write code, so it was enlightening reading your thoughts on the whole workload dependency for multi-threading. I too have wondered why gaming has taken a while to really embrace multi-core processors, and your explanation helps to understand some of those reasons (though I can say I never thought it was because devs are fat and lazy). My question, though, is that with Mantle and DX12 it seems we are looking for more and more ways to offload the CPU as much as possible, and to me that seems to make the need to try and heavily thread games (which, as you say, may not really be possible anyway) kind of irrelevant. It seems like the direction of making games is that the less the CPU is involved, the better. Certainly correct me if I'm wrong, but I don't understand why people like whats-his-name that was trying to argue programming with you want more and more CPU utilization when, to me at least, it seems pretty clear that the road to more performance in games relies less on the CPU and more on offloading to the GPU. (Maybe they just want to justify their expensive purchase of many-core CPUs to play Battlefield? I dunno..)

I apologize, I didn't mean to completely ignore the topic of the article in the first place... But as a user of both AMD and Intel (actually an AMD user from Socket A up until my first Intel build at the release of Core 2), I certainly hope that they achieve these goals, simply because any innovation from any team is always good for us. I can remember when AMD first mentioned Fusion and adding the GPU to the CPU die... then lo and behold, here comes Intel taking that idea and running with it, beating AMD to market initially with a crappy solution (though AMD still has the better iGPU today), and look where we are now with integrated graphics. I for one do appreciate the strides made with iGPUs for systems such as my Surface Pro. (I would really like to see an AMD APU version of one!) Or when AMD put the memory controller on die with the Athlon 64, then here comes Intel with Nehalem doing the same thing. It really is too bad that they took such a step backwards with Bulldozer when they seemed to have good momentum going and were hanging with Intel back in those days. I really hope that the day may come again when they are close and drive each other to really innovate. I've been an Intel user now since Core 2 and would love to feel like I have another option when building my desktops... possibly by the time I'll be looking to replace my upcoming Haswell-E system?

No, those are exactly the kinds of questions that people need to be asking. It's important to remember one basic thing: games are very complex, and Mantle and DX12 are only doing part of the task. More graphics- and rendering-related tasks are being offloaded to the GPU because that is where they belong. However, this doesn't change anything for the game logic itself, and I'm sure if you've played Civilization 5, you'll have seen how, as the world gets bigger, the time it takes to end each turn gets longer and longer.

There are really maybe three situations that I feel are important for multi-threading:
A: When you know what you want and have everything you need to get it, but it's something you don't need until later (a form of speculative execution).
B: When you have a task that needs to run multiple times on multiple items and doesn't produce side effects, e.g. graphics rendering or protein folding (see the sketch below).
C: A task that occurs regularly (every x seconds, or x milliseconds) and requires very little coordination.

As soon as you have side effects, or have tasks that rely on the output of several other tasks, the ability to make something usefully multi-threaded goes out the window. People might not realize it, but games are one of the most stateful kinds of applications you can have, and just "making it multi-threaded" doesn't solve problems. In fact, if the workload wasn't properly made to run in parallel, making an application multi-threaded can degrade performance when the overhead is more costly than the speedup that's gained from it, or it can make the executing code more confusing because of any locking or thread coordination you may have to do.
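Case B is the easy one, and it's worth seeing how little ceremony it needs when there really are no side effects. A throwaway Python sketch (the scoring function is made up purely for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def score(item):
    # A pure function: output depends only on the input, no shared state touched.
    return sum(i * i for i in range(item))

if __name__ == "__main__":
    work = list(range(2_000, 2_100))
    # Case B: the same side-effect-free task applied to many independent inputs.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(score, work))
    print(len(results), "items scored, first:", results[0])
```

Contrast that with case-A/C code that has to coordinate with other tasks: the moment ordering or shared state matters, the neat `map` above stops being an option.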

I currently develop with Clojure, which is a functional language on top of the JVM among other platforms which I don't typically use (except for ClojureScript which is interesting).
Wikipedia said:
Clojure (pronounced like "closure"[3]) is a dialect of the Lisp programming language created by Rich Hickey. Clojure is a general-purpose programming language with an emphasis on functional programming. It runs on the Java Virtual Machine, Common Language Runtime, and JavaScript engines. Like other Lisps, Clojure treats code as data and has a macro system.

Clojure's focus on programming with immutable values and explicit progression-of-time constructs are intended to facilitate the development of more robust programs, particularly multithreaded ones.

...and

Wikipedia said:
Hickey developed Clojure because he wanted a modern Lisp for functional programming, symbiotic with the established Java platform, and designed for concurrency.[5][6]

Clojure's approach to state is characterized by the concept of identities,[7] which represent it as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another. For this purpose, Clojure provides several mutable reference types, each having well-defined semantics for the transition between states.

To make a long story short: application state is what makes applications demand single-threaded performance, and not managing it well is what reinforces that.
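A rough Python imitation of that idea (a toy, nothing to do with Clojure's actual implementation - just the shape of it): treat state as a value you replace atomically via a pure function, rather than a structure many threads mutate in place.

```python
import threading

class Atom:
    """Toy version of Clojure's atom: one reference, swapped with a pure function."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def swap(self, fn, *args):
        # The transition from one immutable state to the next is the only
        # synchronized part; fn itself never mutates anything in place.
        with self._lock:
            self._value = fn(self._value, *args)
            return self._value

scores = Atom({})  # state is a plain dict that gets replaced, never edited

def add_score(state, player, points):
    return {**state, player: state.get(player, 0) + points}  # returns a NEW dict

threads = [threading.Thread(target=scores.swap, args=(add_score, "p1", 10)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(scores.deref())  # {'p1': 80}
```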
 
Last edited:
Joined
May 21, 2008
Messages
967 (0.16/day)
Processor Ryzen 7 5800X3D
Motherboard MSI MAG X570S Tomahawk Max WiFi
Cooling EK Supremacy EVO Elite + EK D5 + EK 420 Rad, TT Toughfan 140x3, TT Toughfan 120x2, Arctic slim 120
Memory 32GB GSkill DDR4-3600 (F4-3600C16-8GVKC)
Video Card(s) Gigabyte Radeon RX 7900XTX Gaming OC
Storage WDBlack SN850X 4TB, Samsung 950Pro 512GB, Samsung 850EVO 500GB, 6TB WDRed, 36TB NAS, 8TB Lancache
Display(s) Benq XL2730Z (1440P 144Hz, TN, Freesync) & 2x ASUS VE248
Case Corsair Obsidian 750D
Audio Device(s) Topping D50S + THX AAA 789, TH-X00 w/ V-Moda Boompro; 7Hz Timeless
Power Supply Corsair HX1000i
Mouse Sharkoon Fireglider optical
Keyboard Corsair K95 RGB
Software Windows 11 Pro
DEM DANG AMD TURK URR JERBS or something.

Comparative analysis shows this is plausible given the current trend of 0, I for one approve this message.
I concur.

I, for one, would like one of these 0 new free cards. With a 25X improvement over the current 0 free cards, it should not be any issue for me to receive [RESULT UNDEFINED]

**Edit** There is a lot of butthurt in this thread, over PR, nothing more. I am glad they have this goal.

How about we debate the power usage, and how perhaps an embedded capacitor (or several) that can provide the peak power required when firing up more cores or execution units could provide us with a 200MHz CPU that clocks up to 4GHz instantly?

Decoupling capacitor built in anyone?

Your post is invalid. Minimum butthurt level not met. Ignoring.
 