
AMD Charts Path for Future of its GPU Architecture

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
19,084 (3.00/day)
Location
UK\USA
That for sure is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.

Maybe AMD's way is better, but perhaps it's wiser to do what nVidia started? As we all know, companies have not really supported AMD all that well. And we also know AMD doesn't have shedloads of money to get something fully supported.

Not trying to say you're wrong, just saying we don't know both sides of the story or the reasoning behind it.
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
Nvidia bought a physics company… AMD bought a graphics company. So yes, it makes sense that Nvidia wanted a lead and got one, and that they kept it (as much as they could) as their proprietary intellectual property, which is understandable.

AMD got into the graphics side, dusting off ATI and getting them back into contention, wanting all along to achieve this. It just takes time, and the research is bearing fruit.

The best part for us is that AMD appears committed to the open specification, and that will really make more developers want in.
 
Joined
Apr 21, 2010
Messages
146 (0.03/day)
Location
Perth, Australia
Processor 5800x3d
Motherboard Asus B550 Gaming-F
Cooling Ek 240 Aio
Memory Gskill Trident Neo 4000 18-22-22-42 @3800 fclk 1900
Video Card(s) 2080ti
Storage 1 TB Nvme
Power Supply Seasonic 750w
Software Win 11
This could potentially lead to CUDA becoming open. If AMD can get enough support from developers, Nvidia will have to open it up or risk seeing it fall by the wayside in favor of something that supports both.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
This is basically what Intel tried with Larrabee, and failed

I always thought they failed because they didn't have good enough graphics drivers, and as an accelerator alone it would not be financially viable. I remember reading that for HPC Larrabee was good enough, but you know better than me that in order for these big chips to be viable, you need the consumer market to have some volume, refine the process, bin chips, etc. Even if it's a small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the tens of thousands of HPC cards you can sell. At least for now. Maybe in a few years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.

This could potentially lead to CUDA becoming open. If AMD can get enough support from developers, Nvidia will have to open it up or risk seeing it fall by the wayside in favor of something that supports both.

Eh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going away right now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA every now and then and stays way ahead with more features (the advantage of not depending on standardization by a consortium). At some point it should stagnate, and OpenCL should be able to catch up, even if its evolution depends on the Khronos group.
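To put the "95% similar" claim in concrete terms, here's a rough side-by-side of the same vector-add kernel in CUDA C and OpenCL C. The kernels are held as strings just so the comparison can be printed; actually running them would require the respective toolchains and a GPU.

```python
# Side-by-side: an equivalent vector-add kernel in CUDA C and OpenCL C.
# Held as strings purely for comparison, not executed here.

cuda_src = """
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // per-thread index
    if (i < n) c[i] = a[i] + b[i];
}
"""

opencl_src = """
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n) {
    int i = get_global_id(0);  // per-work-item index
    if (i < n) c[i] = a[i] + b[i];
}
"""

# The arithmetic body is identical; only the function/pointer qualifiers
# and the built-in index query differ between the two dialects.
print(cuda_src)
print(opencl_src)
```

The mechanical nature of the differences is why porting between the two is usually considered straightforward.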
 
Joined
Jan 2, 2009
Messages
9,899 (1.70/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some wierd Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.


No they haven't man, they just don't bang on about it.

They talk directly to developers and have had a forum running for years where people can communicate about it.

Go on the AMD developer forums to see : ]
 
Joined
Apr 21, 2010
Messages
146 (0.03/day)
Location
Perth, Australia
Processor 5800x3d
Motherboard Asus B550 Gaming-F
Cooling Ek 240 Aio
Memory Gskill Trident Neo 4000 18-22-22-42 @3800 fclk 1900
Video Card(s) 2080ti
Storage 1 TB Nvme
Power Supply Seasonic 750w
Software Win 11
Eh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going away right now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA every now and then and stays way ahead with more features (the advantage of not depending on standardization by a consortium). At some point it should stagnate, and OpenCL should be able to catch up, even if its evolution depends on the Khronos group.

All I was saying is that if Nvidia plans on seeing CUDA through the next 5 years or so, they'll almost certainly have to open it up. I don't know the specifics of CUDA vs OpenCL, but my understanding was that CUDA, as it stands, is the more robust platform.
 
Joined
Mar 9, 2011
Messages
194 (0.04/day)
Location
Montreal, Canada
Processor Phenom II 955 @ 3955Mhz 1.45v
Motherboard ASUS M4A79XTD EVO
Cooling CoolerMaster Hyper TX3 push-pull /2x140mm + 2x230mm + 2x120mm = super noisy computer
Memory 4x2Gb Kingston DDR3-1333 8-8-8-22 @ 1527Mhz
Video Card(s) Crossfire 2x Sapphire Radeon 6850 @ 850/1200
Storage 320Gb Western Digital WD3200AAJS
Display(s) Samsung 23" 1920x1080
Case Azza Solano 1000R Full-Tower
Audio Device(s) VIA VT1708S (integrated) + quadraphonic speakers
Power Supply CoolerMaster Extreme Power Plus 700w
Software Windows 7 Ultimate 64bit
"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?
 
Joined
Jul 10, 2010
Messages
1,233 (0.23/day)
Location
USA, Arizona
System Name SolarwindMobile
Processor AMD FX-9800P RADEON R7, 12 COMPUTE CORES 4C+8G
Motherboard Acer Wasp_BR
Cooling It's Copper.
Memory 2 x 8GB SK Hynix/HMA41GS6AFR8N-TF
Video Card(s) ATI/AMD Radeon R7 Series (Bristol Ridge FP4) [ACER]
Storage TOSHIBA MQ01ABD100 1TB + KINGSTON RBU-SNS8152S3128GG2 128 GB
Display(s) ViewSonic XG2401 SERIES
Case Acer Aspire E5-553G
Audio Device(s) Realtek ALC255
Power Supply PANASONIC AS16A5K
Mouse SteelSeries Rival
Keyboard Ducky Channel Shine 3
Software Windows 10 Home 64-bit (Version 1607, Build 14393.969)
1. The architecture explained in this diagram is the HD 7000

VLIW5 -> VLIW4 -> ACE or CU




http://www.realworldtech.com/forums/index.cfm?action=detail&id=120431&threadid=120411&roomid=2
Name: David Kanter 6/15/11

Dan Fay on 6/14/11 wrote:
---------------------------
>Hi David,
>
>If you're not already planning to do so, I'd be really curious how well this architecture
>is expected to perform with GPGPU.
>
>
>Thanks!

At a high level, it appears that their next architecture will exceed Fermi in a number of areas. This is to be expected, and I'll be most interested to see how Kepler pushes programmability forward.

Some of the GPGPU improvements include:
1. Real caches
2. Graphs of data parallel kernels
3. Exceptions, recursion, function calls
4. Better branching, predication, masking, control flow
5. No more VLIW, instead a scalar+vector arch with fewer scheduling rules and more regular code generation
6. Acquire/release consistency model
7. ECC support for some SKUs
8. Substantially better DP performance for some SKUs
9. Faster global atomics

I don't really want to give away too much, since I will be working on an article soon.


David

http://www.realworldtech.com/

^wait for the article dar
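Item 4 on Kanter's list (better branching, predication, masking) can be sketched in a toy way. This is illustrative Python, not GPU code: the point is that SIMD/SIMT hardware often evaluates both sides of a conditional for every lane and uses a per-lane mask to select results, instead of truly branching per element.

```python
# Toy illustration of predication/masking: compute both branch results
# for all lanes, then select per lane with a mask, as a SIMT machine
# would do for divergent control flow within a warp/wavefront.
def predicated_abs(values):
    mask = [v < 0 for v in values]       # per-lane predicate
    negated = [-v for v in values]       # "then" side, computed for all lanes
    passthru = [v for v in values]       # "else" side, computed for all lanes
    return [n if m else p for m, n, p in zip(mask, negated, passthru)]

print(predicated_abs([-3, 5, -1, 0]))  # [3, 5, 1, 0]
```

Hardware support for cheaper masking means less wasted work when lanes within a group disagree on a branch.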

2. In 2 years you will see this GPU in the Z-series APU (the tablet APU)

"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?

In the cloud future, yes; the CPU will only need to issue commands.

Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 5 years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principles Nvidia has been using for the past 5 years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.

AMD GPUs have been GPGPU-capable ever since the high-end GPUs could do DP

This architecture just allows a bigger jump (ahead of Kepler)

Hopefully it won't turn into another DX10.1. ATI does it, but NV says no so the industry caves to NV.

Course this is much bigger. Saw this coming. Our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI...heh. Nice move and I guess it makes more sense to ditch the ATI name if you are gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.

Nvidia was very late; some late 200-series cards can do DX10.1, but not very well

Nvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it. And once it was implemented and tested, asking what's next and implementing that too. They have been getting the answers, and now AMD only has to implement them. Nvidia has also been investing a lot in universities to teach and promote GPGPU for a very long time, much sooner than anyone else thought about promoting the GPGPU route.

AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.



In fact, yes. Entrepreneurial companies constantly invest in products whose viability is still in question and whose markets are small. They create the market.

There's nothing wrong in being one of the followers, just give credit where credit is due. And IMO AMD deserves none.



They have had top performers in gaming. Other than that, Nvidia has been way ahead in professional markets.

And AMD did not pioneer GPGPU. It was a group at Stanford who did it, and yes, they used X1900 cards, and yes, AMD collaborated, but that's far from pioneering it, and it was not really GPGPU; it mostly used DX and OpenGL for doing math. By the time that was happening, Nvidia had already been working on GPGPU in their architecture for years, as can be seen with the launch of G80 only a few months after the introduction of the X1900.



That for sure is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.

I already predicted this change in direction a few years ago anyway. When Fusion was first promoted I knew they would eventually move in this direction, and I also predicted that Fusion would represent a turning point in how aggressively AMD would promote GPGPU. And that's been the case. I have no love (nor hate) for AMD for this simple reason: I understand they are the underdog and need some marketing on their side too, but they always sell themselves as the good company, yet do nothing but downplay others' strategies until they are able to follow them, and they ultimately do follow them. Just a few months ago (at the HD 6000 introduction) VLIW was the only way to go, almost literally the godsend, while Fermi was mocked as the wrong way to go. I knew it was all marketing BS, and now it's been demonstrated, but I guess people have short memories, so it works for them. Oh well, all these fancy new features are NOW the way to go. And it's true, except there's nothing new about them...

The reason they are changing is not the GPGPU issue; it's more the scaling issue

Theoretical -> Realistic
Performance didn't scale correctly



It's all over the place. Well, scaling is a GPGPU issue, but this architecture will at least allow for better scaling ^
 
Joined
May 23, 2008
Messages
376 (0.06/day)
Location
South Jersey
Wow, how times are turning backwards.

I got me a new math co-processor!
 

Sapientwolf

New Member
Joined
Aug 23, 2006
Messages
57 (0.01/day)
Processor Intel Core 2 Quad QX9770 Yorkfield 4.00GHz
Motherboard Asus P5E3 Deluxe/WiFi-AP X38 Chipset Motherboard
Cooling Cooler Master Hyper 212 CPU Heatsink| Fans: Intake 1x120mm and 2x140mm| Exhaust 1x120mm and 2x140mm
Memory 4GB OCZ Platinum DDR3 1600 7-7-7-26
Video Card(s) 2 x Diamond Multimedia HD 4870 512MB Graphics Cards in CrossfireX
Storage 2 Western Digital 500GB 32MB Cache Caviar Blacks in RAID 0| 1 500GB 32MB Cache Seagate Barracuda.
Display(s) Sceptre X24WG 24" 1920x1200 4000:1 2ms LCD Monitor
Case Cooler Master CM 690
Audio Device(s) HT Omega HT Claro+
Power Supply Aerocool 750W Horsepower PSU
Software Windows Vista Home Premium x64
"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?

The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area and the GPU to its own as well. However, now we see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.
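A rough sketch of that distinction, in plain Python (conceptual only, no GPU involved): the scaled-add below is data parallel, since every element is independent and could be computed simultaneously, while the recurrence is inherently sequential, since each step needs the previous result.

```python
# Conceptual sketch: GPU-friendly vs CPU-friendly workloads.

def scaled_add(a, b, s):
    # c[i] = s*a[i] + b[i] -- every i is independent, so all elements
    # could be computed at once on data-parallel hardware (GPU-friendly).
    return [s * x + y for x, y in zip(a, b)]

def recurrence(x0, steps):
    # x_{n+1} = 0.5*x_n + 1 -- each step depends on the previous one,
    # a serial dependency chain that maps poorly to a GPU (CPU-friendly).
    x = x0
    for _ in range(steps):
        x = 0.5 * x + 1
    return x

print(scaled_add([1, 2, 3], [4, 5, 6], 2))  # [6, 9, 12]
print(recurrence(0.0, 4))                   # 1.875
```

The "gray area" in the post is exactly the workloads that sit between these two extremes.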
 
Joined
Apr 21, 2008
Messages
5,250 (0.87/day)
Location
IRAQ-Baghdad
System Name MASTER
Processor Core i7 3930k run at 4.4ghz
Motherboard Asus Rampage IV extreme
Cooling Corsair H100i
Memory 4x4G kingston hyperx beast 2400mhz
Video Card(s) 2X EVGA GTX680
Storage 2X Crusial M4 256g raid0, 1TbWD g, 2x500 WD B
Display(s) Samsung 27' 1080P LED 3D monitior 2ms
Case CoolerMaster Chosmos II
Audio Device(s) Creative sound blaster X-FI Titanum champion,Creative speakers 7.1 T7900
Power Supply Corsair 1200i, Logitch G500 Mouse, headset Corsair vengeance 1500
Software Win7 64bit Ultimate
Benchmark Scores 3d mark 2011: testing
So they point to a big improvement in performance, and only benchmarks can prove it.
 
Joined
Jun 3, 2008
Messages
231 (0.04/day)
System Name Uh, my build?
Processor Intel Core i7 3770k 3.5GHz (3.9GHz turbo)
Motherboard Gigabyte Z77X-UD5H (F8 BIOS)
Cooling Coolermaster Hyper 212 Evo
Memory G.Skill 8GB DDR3 1600MHz CL9
Video Card(s) Gigabyte Radeon HD7970 3GB 1GHz Core/5.5GHz Memory
Storage SanDisk Extreme Pro 960GB & 2TB WD Black & 1TB WD Green
Display(s) 1x Samsung 23" Syncmaster P2350 1x LG 23"
Case Coolermaster HAF X
Audio Device(s) Onboard now since store didn't RMA properly
Power Supply Corsair HX 850W
Software Win 10 Pro 64bit
Benchmark Scores 3DMark 11 - P8456 - http://3dmark.com/3dm11/3372758
So they point to a big improvement in performance, and only benchmarks can prove it.

Well, probably not only benchmarks. You will see a decrease in the time it takes to process certain things, similar, for example, to how decoding and re-encoding can be done by the GPU in certain programs.
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.13/day)
The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area and the GPU to its own as well. However, now we see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.

The decoder will handle this job, more than likely.
 
Joined
Mar 11, 2010
Messages
120 (0.02/day)
Location
El Salvador
System Name Jaguar X
Processor AMD Ryzen 7 7700X
Motherboard ASUS ROG Strix X670E-E Gaming WiFi
Cooling Corsair H150 RGB
Memory 2x 16GB Corsair Vengeance DDR5-6000
Video Card(s) Gigabyte RTX 4080 Gaming OC
Storage 1TB Kingston KC3000 + 1TB Samsung 970 EVO Plus
Display(s) LG C1
Case Cougar Panzer EVO RGB
Power Supply XPG Core Reactor 850W
Mouse Cougar Minos XT
Keyboard Cougar Ultimus RGB
Software Windows 11 Pro
The future is fusion, remember? CPU and GPU becoming one. It's going to happen, I believe it, but are we really gonna "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance, they just want to listen to their music, check Facebook and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China, and we'll be given just what we need, the final product, through a huge broadband connection. If you've seen the movie Wall-E, think about living in that spaceship Axiom; it'd be something like that... creepy, eh?
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.67/day)
System Name The ClusterF**k
Processor 980X @ 4Ghz
Motherboard Gigabyte GA-EX58-UD5 BIOS F12
Cooling MCR-320, DDC-1 pump w/Bitspower res top (1/2" fittings), Koolance CPU-360
Memory 3x2GB Mushkin Redlines 1600Mhz 6-8-6-24 1T
Video Card(s) Evga GTX 580
Storage Corsair Neutron GTX 240GB, 2xSeagate 320GB RAID0; 2xSeagate 3TB; 2xSamsung 2TB; Samsung 1.5TB
Display(s) HP LP2475w 24" 1920x1200 IPS
Case Technofront Bench Station
Audio Device(s) Auzentech X-Fi Forte into Onkyo SR606 and Polk TSi200's + RM6750
Power Supply ENERMAX Galaxy EVO EGX1250EWT 1250W
Software Win7 Ultimate N x64, OSX 10.8.4
You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.
 
Joined
Jul 10, 2010
Messages
1,233 (0.23/day)
Location
USA, Arizona
System Name SolarwindMobile
Processor AMD FX-9800P RADEON R7, 12 COMPUTE CORES 4C+8G
Motherboard Acer Wasp_BR
Cooling It's Copper.
Memory 2 x 8GB SK Hynix/HMA41GS6AFR8N-TF
Video Card(s) ATI/AMD Radeon R7 Series (Bristol Ridge FP4) [ACER]
Storage TOSHIBA MQ01ABD100 1TB + KINGSTON RBU-SNS8152S3128GG2 128 GB
Display(s) ViewSonic XG2401 SERIES
Case Acer Aspire E5-553G
Audio Device(s) Realtek ALC255
Power Supply PANASONIC AS16A5K
Mouse SteelSeries Rival
Keyboard Ducky Channel Shine 3
Software Windows 10 Home 64-bit (Version 1607, Build 14393.969)
Well by 2013

The APU
with

Enhanced Bulldozer + Graphic Core Next

Will be perfect unison

and with

2013
FX+AMD Radeon 9900 series
Next-Gen Bulldozer + Next-Gen Graphic Core Next

and DDR4+PCI-e 3.0 will equal MAXIMUM POWUH!!!

:rockout::rockout::rockout: :rockout::rockout::rockout: :rockout::rockout::rockout:
 
Joined
Jan 2, 2009
Messages
9,899 (1.70/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some wierd Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.

I know it's only one thing, but 3DMark 11 does soft-body simulation on the GPU, on both AMD and Nvidia cards.

Only one thing, but it does point to things to come, I think.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,839 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

yup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it

that in order for these big chips to be viable, you need the consumer market in order to have some volume and refine the process, bin chips, etc. Even if it's an small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the 10's of thousands HPC cards you can sell. At least for now. Maybe in some years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.

i agree, but why does amd waste their money with useless computation features that apparently have nowhere to go other than video encode and some hpc apps ?
if there was some killer application for gpu computing wouldn't nvidia/cuda have found it by now?
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
And even that hyped video encoding is mostly done on the CPU, which makes it utterly useless, as it's not much faster than pure CPU anyway. They were bragging about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
Instead they announce it, brag about it, and then we can all forget about it because it'll never happen.
They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to process the screen and anti-alias it. It works great in pretty much 99.9% of games, it's what they promised, and I hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their stuff together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.
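For a sense of what "shaders to process the screen and anti-alias it" means: real MLAA reconstructs edge shapes from pixel patterns, but a very crude toy of the post-process idea (a simplification for illustration, not AMD's actual filter) is to find hard luminance steps and blend across them.

```python
# Toy 1D post-process "anti-aliasing": detect hard luminance steps in
# a scanline and blend the two pixels across each step. Real MLAA is
# far more sophisticated (pattern/shape reconstruction), but the
# "full-screen filter running after rendering" idea is the same.
def blend_edges(row, threshold=0.5):
    out = list(row)
    for i in range(len(row) - 1):
        if abs(row[i] - row[i + 1]) > threshold:   # hard edge detected
            avg = (row[i] + row[i + 1]) / 2        # soften the step
            out[i], out[i + 1] = avg, avg
    return out

print(blend_edges([0.0, 0.0, 1.0, 1.0]))  # [0.0, 0.5, 0.5, 1.0]
```

Because it only reads the finished frame, a filter like this runs on any game, which is why driver-level MLAA could work in "99.9% of games".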
 
Joined
May 31, 2005
Messages
284 (0.04/day)
ATI's support of GPGPU hasn't been as great as some say here. OpenCL support only goes back to the HD 4000 series, because older chips have limitations that make it basically infeasible. In other words, the HD 3000 and 2000 are very poor GPGPU chips. The X1900 isn't really even worth mentioning.

You can on the other hand run CUDA on old G80. NV has definitely been pushing GPGPU harder.

On the other, other hand however I can't say that GPGPU affects me whatsoever. I think AMD is mostly after that Tesla market and Photoshop filters. I won't be surprised if this architecture is less efficient for graphics. I sense a definite divergence from just making beefier graphics accelerators. NV's chips have proven with their size that GPGPU features don't really mesh with graphics speed.
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.13/day)
The future is fusion, remember? CPU and GPU becoming one. It's going to happen, I believe it, but are we really gonna "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance, they just want to listen to their music, check Facebook and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China, and we'll be given just what we need, the final product, through a huge broadband connection. If you've seen the movie Wall-E, think about living in that spaceship Axiom; it'd be something like that... creepy, eh?


even at light speed the latencies will kill ya, there is no way around client-side power. Resist the cloud, it's bullshit anyway.
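The latency point holds up to a quick back-of-envelope check. The numbers below are assumptions for illustration: an ~11,000 km one-way route (roughly US to China), and signals in optical fiber propagating at about two thirds of the vacuum speed of light.

```python
# Back-of-envelope: propagation delay for a hypothetical remote-rendering
# server ~11,000 km away. Both the distance and the fiber speed factor
# are assumed round numbers for illustration.
C_KM_S = 299_792.458      # speed of light in vacuum, km/s
FIBER_FRACTION = 0.66     # light in fiber travels at roughly 2/3 c
distance_km = 11_000      # assumed one-way route length

one_way_ms = distance_km / (C_KM_S * FIBER_FRACTION) * 1000
rtt_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.0f} ms, round trip: {rtt_ms:.0f} ms")
# Propagation alone already exceeds a 60 fps frame budget (~16.7 ms),
# before adding any routing, encoding, or rendering time.
```

So even ignoring every other overhead, physics alone puts input-to-photon latency well past what local hardware delivers.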
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.13/day)
yup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it



i agree, but why does amd waste their money with useless computation features that apparently have nowhere to go other than video encode and some hpc apps ?
if there was some killer application for gpu computing wouldn't nvidia/cuda have found it by now?

Because soon enough the hardware will do the work anyway. It's not always about software. As for Nvidia, they painted themselves into a corner years ago.
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.13/day)
And even that hyped video encoding is mostly done on the CPU, which makes it utterly useless, as it's not much faster than pure CPU anyway. They were bragging about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
Instead they announce it, brag about it, and then we can all forget about it because it'll never happen.
They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to process the screen and anti-alias it. It works great in pretty much 99.9% of games, it's what they promised, and I hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their stuff together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.


They should call D3D "round about the bend, down the street, up the alley, over two blocks and in the ditch 3D", because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,839 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
They should call D3D "round about the bend, down the street, up the alley, over two blocks and in the ditch 3D", because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.

the market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
Exactly. If they try to invent something new and don't push it enough, like they never really did for anything, they're just plain stupid. DirectX is the way to go at the moment, mostly because of what W1z said. Profit.
 