
Mantle API presentation by AMD, DICE and Oxide - AMD Summit 2013

Joined
Nov 9, 2010
Messages
5,691 (1.10/day)
System Name Space Station
Processor Intel 13700K
Motherboard ASRock Z790 PG Riptide
Cooling Arctic Liquid Freezer II 420
Memory Corsair Vengeance 6400 2x16GB @ CL34
Video Card(s) PNY RTX 4080
Storage SSDs - Nextorage 4TB, Samsung EVO 970 500GB, Plextor M5Pro 128GB, HDDs - WD Black 6TB, 2x 1TB
Display(s) LG C3 OLED 42"
Case Corsair 7000D Airflow
Audio Device(s) Yamaha RX-V371
Power Supply SeaSonic Vertex 1200w Gold
Mouse Razer Basilisk V3
Keyboard Bloody B840-LK
Software Windows 11 Pro 23H2

Great Q&A session. One thing I forgot to mention earlier, when talking about the steep initial learning curve, is that the debugging built into Mantle could be a real time saver. I was eagerly awaiting word on that aspect, and at the 29:00 mark they finally remarked on it, appropriately in answer to a small developer. I'm sure it will even help the bigger teams ship more timely, polished products at launch. They talked about that being the "beefy" part of the API, but if you're going to weigh it down, that's one of the best ways to do it.
 
Joined
Aug 17, 2009
Messages
2,558 (0.45/day)
Location
United States
System Name Aluminum Mallard
Processor Ryzen 1900x
Motherboard AsRock Phantom 6
Cooling AIO
Memory 32GB
Video Card(s) EVGA 3080Ti FTW
Storage SSD
Display(s) Benq Zowie
Case Cosmos 1000
Audio Device(s) On Board
Power Supply Corsair CX750
VR HMD HTC Vive, Valve Index
Software Arch Linux
Benchmark Scores 31 FPS in Dalaran
How is it open if it runs only on Radeons? Am I missing something, or does it work only on the GCN arch?



Maybe wishful thinking on the slide, but even if it's only for GCN cards initially, AMD will be killing the API by making it proprietary.
 

HTC

Joined
Apr 1, 2008
Messages
4,668 (0.76/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
http://www.guru3d.com/index.php?ct=news&action=file&id=4807

Maybe wishful thinking on the slide, but even if it's only for GCN cards initially, AMD will be killing the API by making it proprietary.

Doesn't it state that it should work with other graphics vendors as well? :confused:


Personally, I think this is way over-optimistic, but I AM convinced this is the way forward because, even if it doesn't end up bringing the amount of extra performance they claim, I'm sure it will bring some (10%-15% maybe??) to current graphics cards.

Also, if this new tech does work with other graphics vendors, then it isn't tied to GCN, which means that maybe previous AMD cards can support it too. Explaining exactly which cards support it and which don't would be helpful, and the same for other vendors, IMO. Obviously, it should work best on GCN, though.

I'll have to see two identical systems running identical benches, one using Mantle, to see the difference; until then, I'm quite skeptical.
 
Joined
Oct 2, 2004
Messages
13,791 (1.86/day)
If all goes well, Mantle could go its own path next to D3D and OGL: much higher performance than OGL, just as platform-independent as OGL, and faster-evolving, whereas D3D seems to have been stagnating for quite some time now, plus it's limited to PCs and the Xbox only.

I sure hope it will be a success, because we could all benefit from it, from developers up to us, the gamers.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,781 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
I don't buy it. AMD started drumming up the "down to the metal" parade a few years back, and several companies immediately said they were wrong, including Crytek (they know a thing or two about high-level PC graphics). They have said many times (and in many ways) that what AMD was advocating was nice in theory but wasn't feasible.

That was then, now is now, and AMD should know something about that sort of thing too.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
That was then, now is now, and AMD should know something about that sort of thing too.

They know hardware. They have always been mediocre when it came to software support. I refuse to believe that a company that couldn't get drivers working adequately for the 4-5 years I used their products can all of a sudden release an API that revolutionizes PC game development to the extent they are claiming. We've still heard no specifics other than "removes overhead" and "increases performance astronomical%". I still haven't seen a single video of the technology in question at work; it's supposed to go live next month and I've seen nothing to back up these claims. You would think that with AMD having just released new GPUs they would be showcasing this thing left and right to try to get their products sold going into the holiday season.
 

TheMailMan78

Big Member
Joined
Jun 3, 2007
Messages
22,599 (3.50/day)
Location
'Merica. The Great SOUTH!
System Name TheMailbox 5.0 / The Mailbox 4.5
Processor RYZEN 1700X / Intel i7 2600k @ 4.2GHz
Motherboard Fatal1ty X370 Gaming K4 / Gigabyte Z77X-UP5 TH Intel LGA 1155
Cooling MasterLiquid PRO 280 / Scythe Katana 4
Memory ADATA RGB 16GB DDR4 2666 16-16-16-39 / G.SKILL Sniper Series 16GB DDR3 1866: 9-9-9-24
Video Card(s) MSI 1080 "Duke" with 8Gb of RAM. Boost Clock 1847 MHz / ASUS 780ti
Storage 256Gb M4 SSD / 128Gb Agelity 4 SSD , 500Gb WD (7200)
Display(s) LG 29" Class 21:9 UltraWide® IPS LED Monitor 2560 x 1080 / Dell 27"
Case Cooler Master MASTERBOX 5t / Cooler Master 922 HAF
Audio Device(s) Realtek ALC1220 Audio Codec / SupremeFX X-Fi with Bose Companion 2 speakers.
Power Supply Seasonic FOCUS Plus Series SSR-750PX 750W Platinum / SeaSonic X Series X650 Gold
Mouse SteelSeries Sensei (RAW) / Logitech G5
Keyboard Razer BlackWidow / Logitech (Unknown)
Software Windows 10 Pro (64-bit)
Benchmark Scores Benching is for bitches.
Love how people are name-dropping DICE as validation of Mantle when BF4 is a "Gaming Evolved" title.
 

That was then, now is now, and AMD should know something about that sort of thing too.

One could also say Crytek themselves have never been that great at optimizing games. Their philosophy is not unlike Nvidia's, "Let's make games that melt PCs just to say we care about graphics", while Nvidia facilitates them with resource-hungry GPU features.

It's time for some concerned game and driver devs to pull us out of the stone age MS has kept us in for years.
 
Last edited by a moderator:
Joined
Feb 18, 2009
Messages
1,825 (0.31/day)
Location
Slovenia
System Name Multiple - Win7, Win10, Kubuntu
Processor Intel Core i7 3820 OC@ 4.0 GHz
Motherboard Asus P9X79
Cooling Noctua NH-L12
Memory Corsair Vengeance 32GB 1333MHz
Video Card(s) Sapphire ATI Radeon RX 480 8GB
Storage Samsung SSD: 970 EVO 1TB, 2x870 EVO 250GB,860 Evo 250GB,850 Evo 250GB, WD 4x1TB, 2x2TB, 4x4TB
Display(s) Asus PB328Q 32' 1440p@75hz
Case Cooler Master CM Storm Trooper
Power Supply Corsair HX750, HX550, Galaxy 520W
Mouse Multiple, Razer Mamba Elite, Logitech M500
Keyboard Multiple - Lenovo, HP, Dell, Logitech
Now, I found some developer-esque sites with probably more accurate and well-founded opinions; here are some of the notable quotes from several sites:



Runtime Compilation:
The way it currently works with D3D11 is that you compile your shaders to D3D assembly, which is basically a hardware-agnostic "virtual" ISA. In order to run these shaders on a GPU, the driver needs to compile the D3D assembly into its native ISA. Since developers can't do this conversion ahead of time, the driver has to do a JIT compile when the game loads its shaders. This makes the game take longer to load, and the driver doesn't have a lot of time to try aggressive optimizations. With a hardware-specific API you can instead compile your shaders directly into the hardware's ISA and avoid the JIT compile entirely.
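The load-time difference that quote describes can be put in a toy cost model (all names and costs below are invented for illustration, not taken from any real driver):

```python
# Toy model: D3D11 ships portable bytecode that the driver must JIT-compile
# to the GPU's native ISA at load time, while a hardware-specific API lets
# the developer ship native ISA and pay only an upload cost at launch.

JIT_COST_PER_SHADER = 3   # arbitrary units of driver compile work per shader
COPY_COST = 1             # cost of just uploading an already-native binary

def d3d11_load_cost(shader_count):
    """Every launch pays a JIT compile for every shader."""
    return shader_count * JIT_COST_PER_SHADER

def native_isa_load_cost(shader_count):
    """Shaders were compiled offline; load is only an upload."""
    return shader_count * COPY_COST

assert d3d11_load_cost(200) == 600
assert native_isa_load_cost(200) == 200  # same shaders, far less load-time work
```

The ratio is made up, but the shape of the argument is the point: the JIT cost recurs on every launch and scales with shader count, while offline compilation pays it once, on the developer's machine.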

As for patching, the driver may need to patch shaders in order to support certain functionality available in D3D. As an example, let's say that a hypothetical GPU actually performs its depth test in the pixel shader instead of having extra hardware to do it. This would mean that the driver would have to look at which depth state is currently bound to the context when a draw call is issued, and patch the shader to use the correct depth-testing code. With a hardware-specific shader compiler you can instead just provide the ability to perform the depth test in the pixel shader, and totally remove the concept of depth states.
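That patching scenario can be sketched the same way (hypothetical shader text and state names; no real driver patches strings like this, it rewrites compiled ISA):

```python
# Hypothetical sketch of driver-side shader patching: the imaginary GPU has
# no depth-test hardware, so the driver splices depth-test code into the
# pixel shader based on whichever depth state is bound when the draw happens.

DEPTH_TESTS = {
    "less":   "if (frag_z >= buf_z) discard;",
    "always": None,               # depth test always passes; nothing to splice
}

def patch_for_depth_state(shader_lines, depth_state):
    """Re-run by the driver whenever the bound depth state changes."""
    test = DEPTH_TESTS[depth_state]
    return shader_lines + [test] if test else list(shader_lines)

ps = ["color = sample(tex, uv);"]
assert patch_for_depth_state(ps, "less") == [
    "color = sample(tex, uv);", "if (frag_z >= buf_z) discard;"]
assert patch_for_depth_state(ps, "always") == ["color = sample(tex, uv);"]
```

With a hardware-specific compiler the developer would just write the discard line into the shader themselves, and the "depth state" object, and the patching it forces, disappears.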


As for occlusion queries, the main problem with them in D3D/GL is that the data can only be read by the CPU but the data is actually generated by the GPU. The GPU typically lags behind the CPU by a frame or more so that the CPU has enough time to generate commands for the GPU to consume, which means if the CPU wants to read back GPU results they won't be ready until quite a bit of time after it issued the commands. In practice this generally requires having the CPU wait at least a frame for query results. This means you can't really effectively use it for something like occlusion culling, since by the time the data is usable it's too late.
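The readback latency described above can be simulated with a toy CPU/GPU timeline (the two-frame lag is an illustrative assumption; real lag varies with driver buffering):

```python
from collections import deque

# Toy timeline: the GPU runs FRAME_LAG frames behind the CPU, so an occlusion
# query issued on CPU frame N is only safe to read on frame N + FRAME_LAG.
FRAME_LAG = 2

def query_readback_frames(total_frames):
    in_flight = deque()   # queries the GPU hasn't finished yet
    ready_at = {}         # frame a query was issued -> frame its result is ready
    for cpu_frame in range(total_frames):
        in_flight.append(cpu_frame)        # CPU issues one query per frame
        if len(in_flight) > FRAME_LAG:     # GPU retires the oldest query
            ready_at[in_flight.popleft()] = cpu_frame
    return ready_at

r = query_readback_frames(6)
assert r[0] == 2 and r[3] == 5   # results trail their issue frame by two frames
# By the time frame 0's visibility data is readable (frame 2), the CPU has
# already built frames 1 and 2: too late to use it for occlusion culling.
```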


What looks terribly interesting is the possibility, with Mantle, of having a desktop Kaveri plus a 2xx card and seeing your game benefit from the iGPU, e.g. by having GPGPU work executed on the iGPU...
Something that would REALLY make gamers prefer AMD CPUs over any Intel one (as long as the CPU is 'fast enough', but with more gruntwork/sound moved to the front end, it might be).
On a gaming notebook, that would *really* make the difference.


I see two other slides about DX/GL parallelism and parallel dispatch with Mantle... but NV has had these features in DX11 for many years.
AMD's claim is that Mantle's parallel dispatch is better than DX's implementation,
because AMD's DX11 GPUs do not support the Command Lists and Multi-threaded Rendering features from DX11; NV supports them.
And there are very small gains; Johan has given previous talks at GDC and the like about it.
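The DX11 feature being referred to (deferred contexts recording command lists on worker threads, replayed in order on the immediate context) can be sketched like this (object names invented; Python threads stand in for render worker threads):

```python
import threading

# Sketch of the DX11 "command list" idea: worker threads record commands in
# parallel without touching the GPU, then a single immediate context replays
# the recorded lists in a fixed order, keeping submission deterministic.

def record_chunk(objects, lists, slot):
    lists[slot] = [f"draw({o})" for o in objects]   # record only, no execution

chunks = [["terrain", "water"], ["buildings"], ["units", "sky"]]
lists = [None] * len(chunks)
workers = [threading.Thread(target=record_chunk, args=(c, lists, i))
           for i, c in enumerate(chunks)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Submission order is deterministic even though recording was parallel.
frame = [cmd for cl in lists for cmd in cl]
assert frame == ["draw(terrain)", "draw(water)", "draw(buildings)",
                 "draw(units)", "draw(sky)"]
```

The quote's complaint is that on AMD's DX11 driver this path fell back to serial recording, which is why the measured gains were small there.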

NVAPI is different, it is a driver interface framework. It doesn't replace OpenGL or DX.

EDIT: I think this is the same sort of thing from ATI.
http://developer.amd.com/tools-and-sdks/graphics-development/amd-gpu-services-ags-library/


I think the multi-GPU benefits could be astounding for that market. For so long it's always been AFR, doing tricks to smooth timing issues, no shared resources, etc. The ability to allocate specific rendering tasks to either GPU, or even to an onboard APU combined with a discrete card, is fantastic.
Exactly... the people who say that Mantle will have more of an effect on low-end markets but not on high-end markets are heavily mistaken.





--------------------------------------------

Also, here's a funny exchange between some guys, the first guy is referring to watching this Q&A video:


After absorbing all the info I'm quite taken aback by the potential on offer for PC gaming, not just from Mantle on a dedicated graphics card but also on a PC based on an APU. But it begs one question: WTF have the MS DirectX team been doing for the last 10 years?

Killing PC gaming in favor of consoles and charging PC enthusiasts a premium for everything, it seems.
Do not forget Nvidia's part in all of this: they had a slight advantage in games that secretly adopted NVAPI, which made them come first in benchmarks despite having inferior hardware in many GPU comparisons.
Thank God for AMD and Chris Roberts for helping stick it to them.:rockout:

---------------
Next stop is to make it game-agnostic and fully implemented in the API.
And then world domination with 100% performance boost between all kinds of GCN GPUs: two different discrete cards + iGPU.


In a way, I kind of like the arrogance of Nvidia; it allowed AMD to hop on this and rise up. It makes a good story, and it balances the industry, with no one company controlling everything.





Great Q&A session. One thing I forgot to mention earlier, when talking about the steep initial learning curve, is that the debugging built into Mantle could be a real time saver. I was eagerly awaiting word on that aspect, and at the 29:00 mark they finally remarked on it, appropriately in answer to a small developer. I'm sure it will even help the bigger teams ship more timely, polished products at launch. They talked about that being the "beefy" part of the API, but if you're going to weigh it down, that's one of the best ways to do it.

Exactly; the bigger teams will put a lot into perfecting the launch, so those first-day technical difficulties will be practically non-existent or rare. Because they are in control, most of the mistakes and random driver crashes can be avoided completely: not just the ability to fix them if they're found, but with Mantle you would just AVOID the possibility of them in the first place, and this is a big, big benefit to development as well as consumers. Win-win!




Also, if this new tech does work with other graphics vendors, then it isn't tied to GCN, which means that maybe previous AMD cards can support it too. Explaining exactly which cards support it and which don't would be helpful, and the same for other vendors, IMO. Obviously, it should work best on GCN, though.

The API will of course not work on Nvidia out of the box. Nvidia would have to code support for it, and they could choose to delay that until the next generation; AMD could then dominate the high end, and potentially every other segment if laptop gamers and average PC gamers played those Mantle-supported games in large numbers, for maybe two generations.

They're smart; they know that if they went proprietary, Nvidia could answer with another API, and the little boost AMD got would come back down on them, because half of the developers would focus on the other API if Nvidia made physically better GPUs. It would add mess to the industry and wouldn't give AMD any financial advantage in the long run. So they get a nice advantage for about one or two years before Nvidia jumps on Mantle. Nvidia may be arrogant and delay this until Mantle gains steam, and suddenly the green team would be in panic mode and... we could see puppies again.


I'll have to see two identical systems running identical benches, one using Mantle, to see the difference; until then, I'm quite skeptical.

But remember, just as Erocker said, Mantle is a proven thing (consoles), unlike UDT, which is something on top of existing APIs. This is a paradigm shift at the core, not something built on top of the existing core; I don't know how anyone can try to compare these two things. And I'm not saying the BF4 Mantle update will bring a 100% increase, of course not, but with time that is totally not an unrealistic number; in the Q&A they even talked about numbers like 2-3 times (if you don't change settings, in a controlled benchmark), but they wanted to stay vague at this point.



If all goes well, Mantle could go its own path next to D3D and OGL: much higher performance than OGL, just as platform-independent as OGL, and faster-evolving, whereas D3D seems to have been stagnating for quite some time now, plus it's limited to PCs and the Xbox only.

DX is also limited to the OS, remember ;)


Every man on this planet with a few grams of brains can realize DirectX is a joke. It's just amazing to what lengths some people go to defend it.


I just get very sick sometimes looking at all the webshit around the nets; just now I read some absolutely asinine garbage. I don't know what's going on in people's heads; these are probably some heavily college-indoctrinated developers who think that Microsoft is the "governing body" we should all be revolving around. And that's not even related to any tech talk: Microsoft is a crappy company by itself if you ask me, with all the NSA connections, all the capitalist arrogance, a classic globalist company connected to shady stuff. Who the heck is going to rely on such a trainwreck for innovation in the free markets? Give me a god damn break.



-------------------------------
Here is AMD Driver Guy saying "we can do only mediocre solutions" - that's exactly what driver updates always were.
http://youtu.be/sSY2KXBoro0?t=12m30s
 
Last edited:
Exactly; the bigger teams will put a lot into perfecting the launch, so those first-day technical difficulties will be practically non-existent or rare. Because they are in control, most of the mistakes and random driver crashes can be avoided completely: not just the ability to fix them if they're found, but with Mantle you would just AVOID the possibility of them in the first place, and this is a big, big benefit to development as well as consumers. Win-win!

Yeah, it should help avoid the dreaded "surprises" and improve optimization. They'll literally be able to push the envelope within the required spec while offering a smooth, bug-free gameplay experience. It should even help them immediately see problems like performance not scaling with lowered settings, a problem many games have.

This kind of API has been long overdue. If this goes as well as expected, those getting in the way of it are going to look awfully silly unless they come up with something equal or better.
 
Joined
Nov 4, 2005
Messages
12,072 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I don't buy it. AMD started drumming up the "down to the metal" parade a few years back, and several companies immediately said they were wrong, including Crytek (they know a thing or two about high-level PC graphics). They have said many times (and in many ways) that what AMD was advocating was nice in theory but wasn't feasible.

I'm also ashamed that not one person mentioning the bloat of DirectX pointed out that it primarily applies to DirectX 9.0c and earlier, not DirectX 10/11. With DirectX 10 they completely gutted the API and reworked it to be more efficient. But you know what the huge problem was? Nobody made games for DirectX 11, because the last generation of consoles was DX9-only. It's like pointing out that games don't optimize for the large amount of RAM available these days, when it's mostly due to the fact that, despite most people having x86-64 compatible CPUs, nobody makes games that support that. Hell, if game devs were really so hindered by the evils of DirectX, how come nobody is using OpenGL? Sure, Sony does a bit, and id, but what about Crytek, and DICE, and Rockstar? The reason is that DirectX isn't the huge bucket of crap it's being made out to be.

And how has nobody pointed out his absurd notion that a software API is somehow converting heat and energy into processing power? That just doesn't happen (well, in graphene it apparently does, but these are silicon-based). If that were the case, someone would have already done it back in the '90s or even earlier, when they started focusing on things like heat generation and power usage. I'll point out that in the world of engineering nothing is 100% efficient. You can get damn close, but there are always inhibiting factors.

Show me a video of side-by-side gameplay with a game running on DirectX and on Mantle, using the same hardware and same settings, and maybe we'll talk. Otherwise, it's all bullshit. In case you forgot, AMD marketing is really, really good at that. Remember when Bulldozer was supposed to use less power than anything Intel offered and perform on par with, or 10-15%+ better than, the highest-end i7? I sure do, and I'll let you look up what actually happened, because it was nothing of the sort. I'm not being duped by more marketing crap, and until AMD shows more than slides or some reps for companies they have "reimbursed" handily, I couldn't care less about this tech, which, by the way, won't be used for the Xbox One or PlayStation 4.

They know hardware. They have always been mediocre when it came to software support. I refuse to believe that a company that couldn't get drivers working adequately for the 4-5 years I used their products can all of a sudden release an API that revolutionizes PC game development to the extent they are claiming. We've still heard no specifics other than "removes overhead" and "increases performance astronomical%". I still haven't seen a single video of the technology in question at work; it's supposed to go live next month and I've seen nothing to back up these claims. You would think that with AMD having just released new GPUs they would be showcasing this thing left and right to try to get their products sold going into the holiday season.

To understand what Mantle does requires you to understand how anything appears on your screen from the data on the disk and user interaction currently.

As a 30,000 foot view it works like this.

The game thread is spawned and requests that the OS set up memory pools and other resource allocations; the OS is responsible for managing threads, memory paging, and dedicated and shared resources. The DX API is responsible for making specific and generic, hardware and software, calls on system resources.


So take a scene where 1000 polygons need rendering and 50 textures are required in memory. Our mouse interacts with the DX API, which hands input to the game thread, which then uses CPU resources to determine where and what to do in-game. The game engine hands the list of polygons and skinning information back out to the DX API, which then checks with the system to see where these textures or skins are in memory, using CPU resources for each polygon call, even for polygons that get excluded by the Z check. Finally, it hands the poly over to the graphics driver to render.

Each time a call is made it uses cycles on the CPU and ties up time the system could be spending fetching or rendering objects. Currently rendering is pipelined so that the GPU never stalls (or stalls are avoided) despite the massive overhead, but every time the pipeline needs to be flushed, or a resource isn't available, we get lag spikes (frame render times increase). Driver "optimizations" are sometimes just forced flushes at specific points where the driver knows the OS/DX API is going to cause a stall, so it preemptively dumps the pipe to facilitate loading the required resource. As of right now, neither the driver nor the system thread can forcibly load resources without causing a BSOD. We are at the mercy of the DX API to behave and load resources, the game thread to request resources paged into RAM from disk, and the driver to keep the GPU busy with other tasks when the pipeline stalls.
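The per-call CPU cost being described can be put in a toy model (costs are arbitrary units, not measurements from any real driver):

```python
# Toy CPU-cost model: a thick API re-validates bound state and resolves
# resources on every single draw call, while a thin, Mantle-style API
# validates once when a command buffer is built and then only submits.

VALIDATE = 5   # per-call state/resource checking in the runtime and driver
SUBMIT = 1     # handing the call to the GPU

def thick_api_cpu_cost(draw_calls):
    return draw_calls * (VALIDATE + SUBMIT)   # overhead scales with call count

def thin_api_cpu_cost(draw_calls):
    return VALIDATE + draw_calls * SUBMIT     # validation paid once up front

assert thick_api_cpu_cost(1000) == 6000
assert thin_api_cpu_cost(1000) == 1005   # per-call overhead mostly gone
```

The numbers are invented, but the scaling is the argument above in miniature: in the thick path the validation term grows with every call issued, which is exactly the CPU time that could otherwise go to fetching or rendering.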

New heterogeneous hardware is DMA (Direct Memory Access) aware and capable, so the GPU could request the needed item directly, with Mantle replacing DX as the go-between that updates the system on the location of resources and memory in use. If they do this on every call and allow the GPU hardware to do preemptive Z-buffer work, it reduces the poly count (say to 800) and cuts the fetch overhead from a latency of four CPU cycles with a double read and single write to a single read direct to L1 cache.


http://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx


And a picture for those who can't read well.
 
Last edited:
One could also say CryTek themselves have never been that great at optimizing games. Their philosophy is not unlike Nvidias', "Let's make games that melt PCs just to say we care about graphics", while Nvidia facilitates them with GPU features that are high resource.

They push hardware 100%; whether or not the average user is at the point where they can run the game at that level is irrelevant. How many other developers do that with such consistency?

This kind of API has been long overdue.

Glide.

@Steevo

I understand most of that, but it doesn't change the fact that you're then asking way more of your GPU. You are no longer managing its tasks; you're instead forcing it to manage its own tasks, as well as still requiring it to talk to the OS continually (that much is unavoidable if you want it to play nice with your screen). If anything, I think this is AMD's attempt to take CPUs out of the equation when it comes to gaming performance; they want everything to be GPU-bound, since they tend to compete infinitely better in the GPU market anyway.

It's hard to compare against DirectX since, as I said, there are almost zero games built using DirectX 11 properly. Almost all games from the past 6-7 years were made as DX9 titles for the sake of consoles, then hastily ported to PCs either as-is or as poor DX10/11 ports. The few games that were built on DX11 run pretty damn well on it: Civ V sees a sizeable performance increase using DX11 vs. DX9, as does World of Warcraft (not insanely demanding titles, but any performance increase is noteworthy), and at release BF3 was one of the first games to truly run DX11, and it looked and ran exceptionally (it still does, although BF4 seems to run noticeably worse for me).

We're also avoiding the possibility that AMD is being highly dishonest about how Mantle will work for other hardware vendors (Nvidia, that is), in the same way that Nvidia is highly dishonest about how PhysX (which happens to be a low-level API as well) works on AMD cards. If Mantle basically gives GCN-based GPUs a massive performance edge and Nvidia cannot compete, they will have to make their own low-level API, which means developers will have to make versions of their games for both, which is a huge headache for them. As it is, we're looking at a near future where developers have to develop for OpenGL/DX and Mantle anyway.
 
Last edited:
They push hardware 100%; whether or not the average user is at the point where they can run the game at that level is irrelevant. How many other developers do that with such consistency?
Quite honestly, almost anyone can push hardware to 100%. It comes down to whether they CHOOSE to, because most know that if you do that without optimizing well, you cater mostly to the high-end market vs. the mainstream gamer market.
You fail to point out the huge and stark difference between Glide and Mantle. Glide was born of ONE company trying to reinvent the API mousetrap, and was solely designed and controlled by them.

Mantle was requested by numerous developers, which in itself substantiates the "long overdue" comment. Mantle was a collaborative effort and will continue to be; it's the closest thing we'll likely see to an open-source API.
 
They push hardware 100%, whether or not the average user is at the point where they can run the game at that level is irrelevant. How many other developers do that with such consistancy?



Glide.

@Steevo

I understand most of that, but that doesn't change the fact that you're then asking way more of your GPU. You are no longer managing its tasks and are instead forcing it to manage its own tasks, as well as requiring it still talk to the OS continually (that much is unavoidable if you want to get it to play nice with your screen). If anything I think this is AMD's attempt to take CPU's out of the equation when it comes to gaming performance, they want everything to be GPU bound since they tend to compete infinitely better in the GPU market anyway.

It's hard to compare DirectX since as I said, there are almost 0 games that are built using DirecX 11 properly. Almost all games from the past 6-7 years were made as DX9 titles for the sake of consoles, then hastily ported to PC's either as is or poorly as DX10/11 ports. The few games that were built on DX11 run pretty damn well on it--Civ V sees a sizeable performance increase when using DX11 vs. DX9, as does World of Warcraft (not insanely demanding titles, but any performance increase is noteworthy) and at release BF3 was one of the first games to truly run DX11 and it looked and ran exceptionally (still does, although BF4 seems to run noticably worse for me).

We're also ignoring the possibility that AMD is being highly dishonest about how Mantle will work for other hardware vendors (Nvidia, that is), in the same way Nvidia is highly dishonest about how PhysX (which happens to be a low-level API as well) works on AMD cards. If Mantle gives GCN-based GPUs a massive performance edge and Nvidia cannot compete, they will have to make their own low-level API, which means developers will have to make versions of their game for both--a huge headache for them. As it is, we're looking at a near future where developers have to develop for OpenGL/DX and Mantle anyway.

First, the percentage of time shaders spend running junk code isn't currently known; neither Nvidia nor AMD shares how much in-flight work gets thrown out when the user turns left or right, and most benchmarks aren't dynamic enough to capture every plausible outcome, which is what causes the lag spikes users experience.

From what I understand, what Mantle does is simply remove the extra pipeline length that causes the stalls, along with the latency introduced by the CPU having to feed each call to the GPU, plus it adds the ability to DMA (done in hardware, so no extra GPU processing load, just like northbridge DMA reduced CPU load back in the day) to remove the penalty for not having textures in memory.

The thing is, we aren't dealing with driver-level software but an API, so as long as Nvidia implements (or has implemented) the same basic hardware functions, there is no reason they couldn't use it. Unlike the wondrous CUDA/PhysX, which works in how many GPU-accelerated, 90-plus-Metacritic Steam games? Oh right.....Nvidia is like the big baby in the sandbox who doesn't want to share, and instead prefers to throw sand, and has thus taught all the other babies to do the same.

Plus there are these developers....

And they kinda make all this go for us...........so even if AMD were selling shit to the public, the developers would risk their reputations on what? It's real, and happening, so get used to the idea.
 
Last edited by a moderator:
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Quite honestly, almost anyone can push hardware to 100%. It comes down to whether they CHOOSE to, because most know that unless you also optimize well, you end up catering mostly to the high-end market instead of mainstream gamers.

What right do people have to complain about inefficiencies if they can't be bothered to make optimizations? I never hear about Crytek or 4A Games complaining about graphics APIs. I've heard Carmack voice his dislike of DirectX, but also crap all over OpenGL for not updating quickly enough. I like DICE's games, but they are far from perfect, and they have really odd priorities if they think their time is best spent working with AMD on Mantle. How about changing Battlefield's hit detection to server-side and fixing the goddamn netcode so people stop killing each other at the same time ~50% of the time?

You fail to point out the huge and stark difference between Glide and Mantle. Glide was born of ONE company trying to reinvent the API mousetrap, and was solely designed and controlled by them.

Mantle was requested by numerous developers, which in itself substantiates the long overdue comment. Mantle was a collaborative effort and will continue to be. It's the closest thing we'll likely see to an open source API.

Glide was supported by quite a few devs as well. Hell, looking at the list EA and Interplay appear to have taken quite a liking to Glide.

A bigger issue I see is that back then there were 3-4 GPU manufacturers (nVidia, ATi, 3dfx, Matrox?) battling it out, and now there is a duopoly with market share split pretty evenly.

I would never call this an open source API. Looking at the slides, AMD says other vendors can get their hardware working with Mantle, but not that it will be easy or cheap; I almost guarantee they intend for Nvidia and Intel to pay for such a luxury. I'm not saying this is a bad thing or unethical--if AMD develops the tech they have every right to license it out--just as Nvidia and Intel have every right to laugh at them and release their own graphics APIs, which I have no doubt they would try (maybe not Intel, but Nvidia would sooner spend 5x as much on their own solution than pay AMD).
 
Joined
Feb 18, 2009
Messages
1,825 (0.31/day)
Location
Slovenia
System Name Multiple - Win7, Win10, Kubuntu
Processor Intel Core i7 3820 OC@ 4.0 GHz
Motherboard Asus P9X79
Cooling Noctua NH-L12
Memory Corsair Vengeance 32GB 1333MHz
Video Card(s) Sapphire ATI Radeon RX 480 8GB
Storage Samsung SSD: 970 EVO 1TB, 2x870 EVO 250GB,860 Evo 250GB,850 Evo 250GB, WD 4x1TB, 2x2TB, 4x4TB
Display(s) Asus PB328Q 32' 1440p@75hz
Case Cooler Master CM Storm Trooper
Power Supply Corsair HX750, HX550, Galaxy 520W
Mouse Multiple, Razer Mamba Elite, Logitech M500
Keyboard Multiple - Lenovo, HP, Dell, Logitech
They push hardware 100%; whether or not the average user is at the point where they can run the game at that level is irrelevant. How many other developers do that with such consistency?

That totally doesn't mean all that utilization is useful work. A considerable amount is only heat.




What is your point? Almost every other Mantle attacker keeps whining about Glide. I really don't care; now is now. I wasn't around at the time, the company went bankrupt, and the API was proprietary. I really don't see the point of discussing this; I think it's just a distraction and a very crappy argument.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
That totally doesn't mean all that utilization is useful work. A considerable amount is only heat.

And? Heat and actual performance are not mutually exclusive and have almost no connection--outside of the fact that without adequate cooling, modern GPUs tend to throttle. Heat is generated when current runs through circuits; unless Mantle can physically improve on the hardware (spoiler: it can't), that aspect is completely unavoidable.

What is your point? Almost every other Mantle attacker keeps whining about Glide. I really don't care; now is now. I wasn't around at the time, the company went bankrupt, and the API was proprietary. I really don't see the point of discussing this; I think it's just a distraction and a very crappy argument.

Because Glide (and PhysX to some extent) was a near-identical concept: a low-level API developed by a hardware vendor that offered large performance gains on their own hardware. Yes, yes, I know the slides say other vendors can take advantage of Mantle, but we don't know any specifics. I guarantee that if AMD created such a golden goose they wouldn't just give it away; they are not that benevolent...
 
Joined
Feb 18, 2009
Messages
1,825 (0.31/day)
Location
Slovenia
System Name Multiple - Win7, Win10, Kubuntu
Processor Intel Core i7 3820 OC@ 4.0 GHz
Motherboard Asus P9X79
Cooling Noctua NH-L12
Memory Corsair Vengeance 32GB 1333MHz
Video Card(s) Sapphire ATI Radeon RX 480 8GB
Storage Samsung SSD: 970 EVO 1TB, 2x870 EVO 250GB,860 Evo 250GB,850 Evo 250GB, WD 4x1TB, 2x2TB, 4x4TB
Display(s) Asus PB328Q 32' 1440p@75hz
Case Cooler Master CM Storm Trooper
Power Supply Corsair HX750, HX550, Galaxy 520W
Mouse Multiple, Razer Mamba Elite, Logitech M500
Keyboard Multiple - Lenovo, HP, Dell, Logitech
I never hear about Crytek or 4A Games complaining about graphics APIs.

That's absolutely nothing. 90% of what they think internally is not public. And I know about Crytek's quote; if they can't see the benefits, that's their problem. Nobody's going to rely on the stallers, which is why I fully support and SALUTE Johan and all the others for putting down the hammer and making it happen, instead of just going "oh, we on PC we..."
It is totally evident that Carmack is more personally interested in the Oculus Rift than in the PC market; they haven't done anything to support PC except, of course, the Rift. And all the comments about PC not being the leading platform, yeah, that's all a financial perspective. I don't care about their financial and long-term corporate concerns; that's their problem. From a consumer viewpoint, if they put in the effort and make great games, people will buy them, whether on PC or the Wii for that matter (tons of X360 and PS3 people bought a Wii just for SSBB).

I like DICE's games, but they are far from perfect, and they have really odd priorities if they think their time is best spent working with AMD on Mantle. How about changing Battlefield's hit detection to server-side and fixing the goddamn netcode so people stop killing each other at the same time ~50% of the time?


Stop mixing things. I've pointed this out before: those are two different discussions; graphics programmers don't write netcode and balancing. You're making a fool of yourself by acting as if there are only five programmers responsible for every single piece of code in the game.

I've said it myself how much I hate BF for all the bugs and the weird geometry-animation absolute weirdness, but that discussion has nothing to do with Mantle. I analyze everything critically; I am not a bozo on the street judging everything about a person based on a first impression. That's what the Mantle attackers do, that's what the indoctrinated trendy college developers do, that's what Twitter junkies do, that's what blogger yuppies do. I can't connect these two things; it makes no sense at all. Yes, I can kick Johan in the ass for the buggy engine, but I can also give him a medal for making Mantle. And what really matters most: one game that EA forced them to do, or the WHOLE damn industry finally moving away from the awful mainstream DX and OGL failtrain? ("Mainstream" because it caters to basically everyone, from indie games to AAA games.)

All these mainstream developers mingling around the chats and the web (not just the ones mentioned, all over the web, I have 20 forum tabs open) are compartmentalized: they focus on their little bubble, don't see the big picture, and just assume everything outright. It makes me sick, and I've just woken up from yesterday's 4-freaking-hour IRC session in an OGL chat room. Eventually they learned something, and while I enjoyed the chat and wasn't exactly arguing, it was still like hammering on their heads. One of the big problems is communication: they didn't distinguish in my chatter between options (what could happen), facts, and my own speculation. And I didn't speculate much except the numbers, 30% initially and 50% for CPU; that's my own speculation, yet they kept hanging on to it as if it could discredit me. If I say upfront that something is speculation, it can't be used as a point of argument to counter all the other things I said, which definitely aren't speculation.

I didn't mean that the whole idea of the discussion made me sick, no, I of course expected that. The sick part is when someone comes out with absolutely asinine stuff like "GPU vendors do driver hacks because they can't get developers to optimize their games".... Driver hacks meaning driver updates or "performance optimizations": yes, they're all driver hacks. There are no real, genuine optimizations in DX; there never were, it's all duct-tape solutions.

And by that point I was banging my head: it's the other ***** way around! It's the DEVELOPERS who are bugging the GPU vendors to fix their games, but GPU vendors can only do a mediocre job by hacking the driver.... telling the GPU what to do in a given situation, by manually flushing a buffer for example, because the game developer cannot do that themselves; DirectX and OpenGL don't allow it. And GPU vendors have to do that for alllllll thooooose games out there. That's when you realize that most of the code in the drivers is app-specific code; that's why they're so big: thousands of games, thousands of combinations of GPU families and operating systems, not to mention other graphics programs such as Photoshop and movie players. All this code is NOT proper optimization; it's a hack if we get to the bottom of it, never a genuine fix. They kept talking as if this were normal, that it has always been driver hacking on DX and OGL since they're high-level APIs, and they keep saying that if they hammer on it long enough, "oh, we could implement everything Mantle does in OGL too"... yeah, tell that to Khronos; nobody has 20 years to wait for that.
 
Last edited:
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
You're making a fool of yourself

I'm not the one claiming a Graphics API can help cool your overheating computer.

I've said it myself how much I hate BF for all the bugs and the weird geometry-animation absolute weirdness, but that discussion has nothing to do with Mantle. I analyze everything critically; I am not a bozo on the street judging everything about a person based on a first impression. That's what the Mantle attackers do, that's what the indoctrinated trendy college developers do, that's what Twitter junkies do, that's what blogger yuppies do. I can't connect these two things; it makes no sense at all. Yes, I can kick Johan in the ass for the buggy engine, but I can also give him a medal for making Mantle. And what really matters most: one game that EA forced them to do, or the WHOLE damn industry finally moving away from the awful mainstream DX and OGL failtrain?

Sooooo DICE makes shit games, but since they helped AMD make Mantle they deserve medals, so they can make efficient shit games? :wtf:

I also wouldn't even put OpenGL and DirectX in the same league. OpenGL is used by a very small portion of game developers, whereas DirectX is essentially the de facto graphics API. Even then, almost nobody uses DirectX 11 properly; it allows for big performance gains if you batch calls, which almost nobody does because they'd rather hack a DX11 version together just to say they have one.

I didn't mean that the whole idea of the discussion made me sick, no, I of course expected that. The sick part is when someone comes out with absolutely asinine stuff like "GPU vendors do driver hacks because they can't get developers to optimize their games".... Driver hacks meaning driver updates or "performance optimizations": yes, they're all driver hacks. There are no real, genuine optimizations in DX; there never were, it's all duct-tape solutions.

Curious that you mention driver hacks specifically while also claiming RAGE was a well-developed game and all the problems were OpenGL's fault rather than id Software's or AMD's--yet only AMD cards had issues with the game; it ran fine on Nvidia from what I recall.
 
Last edited:
Joined
Feb 24, 2009
Messages
3,516 (0.60/day)
System Name Money Hole
Processor Core i7 970
Motherboard Asus P6T6 WS Revolution
Cooling Noctua UH-D14
Memory 2133Mhz 12GB (3x4GB) Mushkin 998991
Video Card(s) Sapphire Tri-X OC R9 290X
Storage Samsung 1TB 850 Evo
Display(s) 3x Acer KG240A 144hz
Case CM HAF 932
Audio Device(s) ADI (onboard)
Power Supply Enermax Revolution 85+ 1050w
Mouse Logitech G602
Keyboard Logitech G710+
Software Windows 10 Professional x64
Because Glide (and PhysX to some extent) was a near-identical concept: a low-level API developed by a hardware vendor that offered large performance gains on their own hardware. Yes, yes, I know the slides say other vendors can take advantage of Mantle, but we don't know any specifics. I guarantee that if AMD created such a golden goose they wouldn't just give it away; they are not that benevolent...

Glide came about because there was no Windows API. DX came about because Glide was a proprietary API and there was a need for a better open API other than OpenGL. In fact, when 3Dfx released their first Voodoo card, it supported not only Glide but also OpenGL.

There were also more than the 4 companies you listed earlier. It was somewhere around 8, with 3Dfx being the dominant one: S3, ATi, nVidia, 3Dfx, Matrox, and a couple of others I can't think of off the top of my head.

Glide failed because there were 8 GPU companies and an extreme need for an open graphics API, given the lackluster support OpenGL received. Glide also helped push MS to make DX better, as Glide supported things long before DX eventually did.

If my memory serves me well, it was 3Dfx and Glide that helped push MS to get DX to version 9, which everyone still considers the biggest performance and visual jump from a previous version.

To compare Mantle to Glide, I think, is not only wrong but also compares two APIs from totally different environments that addressed two different problems. MS does not seem too interested in pushing DX forward much more, which is why we see the push for Mantle: there's far less need now for a universal API, and far more need for one that doesn't hinder the computational power we find in today's computers.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Glide came about because there was no Windows API. DX came about because Glide was a proprietary API and there was a need for a better open API other than OpenGL. In fact, when 3Dfx released their first Voodoo card, it supported not only Glide but also OpenGL.

Correct. Glide was, so to speak, a derivative of OpenGL; the GL in Glide is there because it was heavily based on OpenGL and they wanted to stress that.

There were also more than the 4 companies you listed earlier. It was somewhere around 8, with 3Dfx being the dominant one: S3, ATi, nVidia, 3Dfx, Matrox, and a couple of others I can't think of off the top of my head.

I had forgotten S3, but beyond them the others weren't worth mentioning; it was mostly nVidia, ATi, 3Dfx, S3, and Matrox--with the first three being the biggest players.

Glide failed because there were 8 GPU companies and an extreme need for an open graphics API, given the lackluster support OpenGL received. Glide also helped push MS to make DX better, as Glide supported things long before DX eventually did.

I think AMD is trying to do a similar thing, but the bigger problem is that 7-8 years of PC-esque consoles dragging along the corpse of DirectX 9 has led to abysmal adoption of DirectX 10/11, which is much more streamlined--I've read DirectX 11 is easier to use than any previous version, as well as OpenGL, so there's that.

To compare Mantle to Glide, I think, is not only wrong but also compares two APIs from totally different environments that addressed two different problems. MS does not seem too interested in pushing DX forward much more, which is why we see the push for Mantle: there's far less need now for a universal API, and far more need for one that doesn't hinder the computational power we find in today's computers.

I'm not saying it's a perfect comparison, but it's the closest comparison we have. Also, is nobody going to point out the irony of AMD pushing OpenCL and then releasing Mantle??? They went from bragging about their high-level performance to releasing a low-level API in like a year flat.
 
Joined
Feb 18, 2009
Messages
1,825 (0.31/day)
Location
Slovenia
System Name Multiple - Win7, Win10, Kubuntu
Processor Intel Core i7 3820 OC@ 4.0 GHz
Motherboard Asus P9X79
Cooling Noctua NH-L12
Memory Corsair Vengeance 32GB 1333MHz
Video Card(s) Sapphire ATI Radeon RX 480 8GB
Storage Samsung SSD: 970 EVO 1TB, 2x870 EVO 250GB,860 Evo 250GB,850 Evo 250GB, WD 4x1TB, 2x2TB, 4x4TB
Display(s) Asus PB328Q 32' 1440p@75hz
Case Cooler Master CM Storm Trooper
Power Supply Corsair HX750, HX550, Galaxy 520W
Mouse Multiple, Razer Mamba Elite, Logitech M500
Keyboard Multiple - Lenovo, HP, Dell, Logitech
I'm not the one claiming a Graphics API can help cool your overheating computer.

I'm not even going to respond to BS. That guy's problem with physical silicon limitations is fairly unlikely to be a problem for PC enthusiasts: throw the default cooler off and 1. put a better one on, or 2. put watercooling on; problem solved. No more discussion needed. If it's a problem for "teh mainstream", they will have to join the PC enthusiast crowd's ways, at least temporarily, until the GPU vendor ships better default fans; but no handholding, no spoonfeeding, no babysitting. That's what Mantle is for: people who are sick of being limited by the petty mainstreamers trying to hold us back.

Am I coming down too hard? Maybe, but I'm telling you the truth; someone has to. And I'm definitely not trying to attack anyone with this, just pointing out these compartmentalized groups, how they behave, and all the background--but that's not important for Mantle. They won't change anything; they can just assume the moral high ground until the numbers get out. And again, don't come back at me, because I'm saying it again: it will definitely not be a 100% jump in performance on day one, but with time that's not such an unrealistic value. With time, though, the number will get murky, because the gains will be gradual, and we all know what that means: it's not as noticeable, and the compartmentalized groups certainly aren't going to analyze the numbers to calculate the total boost in its entirety unless we have a controlled benchmark, and those benchmarks don't represent all the other engines either.

Remember, I'm on the defensive side. I don't come out attacking the mainstream; nobody's taking your DX and OGL away from you, AMD has made that perfectly clear. I'm taking this fight dead seriously; I've waited for this since four years ago, when I first slowly became aware of all the problems of PC APIs. Look, if it fails, it fails, no ifs or buts. I might be taking the attitude part personally, but in all these explanations I'm doing my best to stay objective and scientific. I'm not perfect, so my stuff sounds a bit emotional, but all I'm doing is laying out the differences, and if I'm wrong I admit it and correct it. Plus, what's the worst that can happen? Some bozo developer on Mountain Dew and Doritos comes out of college, tries to build a Mantle game, fails, blames AMD, and everyone in the industry takes him seriously and jumps ship?

And some people keep thinking this whole Mantle thing is some kind of massive AMD PR stunt. The announcement being made just before the GPUs ship might be strategic, but I don't see any BS spin on it considering four other developers are involved. It's not like they were showing puppies.

Sooooo DICE makes shit games, but since they helped AMD make Mantle they deserve medals, so they can make efficient shit games? :wtf:

I also wouldn't even put OpenGL and DirectX in the same league. OpenGL is used by a very small portion of game developers where as DirectX is essentially the defacto Graphics API. Even then almost nobody uses DirectX 11, which allows for many performance gains assuming you batch calls which almost nobody does because they'd rather hack a DX11 version together to say they have one.

With Mantle it takes a bit longer to build the rendering pipeline, but it's a fixed cost: they don't have to keep re-fixing the codebase, they can keep improving it, and they have more time to spend on other parts of the game. It shifts work away from all the effort that currently goes into making sure the game runs on PC, and all the constant back-and-forth with GPU vendors.



Curious that you mention driver hacks specifically and also mention RAGE was a well developed game and all the problems were OpenGL's fault and not iD Software or AMD's fault--yet only AMD cards had issues with the game, it ran fine on Nvidia from what I recall.

Rage's codebase was super clean and more stable than almost any other game's; the drivers were the only reason the game didn't work.

1. Both drivers are hacks. "Performance" and "working" aren't the same thing, and this is the big point I want to make: if it's working, that doesn't mean the optimizations are proper; it's still a ton of overhead. Even if Nvidia's driver hacks are better than AMD's, they're still hacks, capisce?
2. AMD mistakenly released the wrong beta driver; the package contained a DLL that was 3 weeks older than it should have been.
3. AMD's OGL drivers simply weren't as good as Nvidia's; beyond app-specific issues, AMD's driver had core support problems.
4. Rage was ahead of every other OGL application in complexity.

Sorry for the double post (I bet on the idea you'd have replied by now :p)

---------------------------------------


This should be the smoking gun of all the quotes from the Q&A:

When asked "what's the benefit to consumers?", the AMD driver guy responded, then handed over to the devs, who could explain it better:

"Increased performance means two things, right? First, it means you can run faster, naturally, on decent hardware, the other way to look at it is that if I don't need to run faster, can I write on a lower end hardware? If I write on a lower end hardware, how does it expand my user base? So suddenly everybody with a small form factor not so powerful notebook can run all these games, right? Extremely expanded user base. So that's one way to look at it, right? The other way to look at it is performance is nice, but to me this in only a stepping stone, because if you're looking at 20% improvement, 2x, 3x, this is purely a performance advantage, when you think about 10x or more, the question you should start asking yourself, what is the new concept I can put on top of this, what new types of games ..."
Exact time: http://youtu.be/sSY2KXBoro0?t=26m45s


Also good to point out:
I'm not quoting exactly, but the AMD guy also said that in his 30 years working on GPUs at ATI and AMD, no other API was developed with direct game-developer contact; it was always done in isolation.
 
Last edited:
Joined
Mar 29, 2012
Messages
414 (0.09/day)
Location
brčko dc/bosnia and herzegovina
System Name windows 10 pro 64bit
Processor i5 6600k 4.4ghz 1.25v
Motherboard asus maximus viii gene
Cooling BeQuiet Dark Rock pro
Memory 2x8(16)GB 2860mhz
Video Card(s) gtx 1070 EVGA
Storage ssd x2 128gb raid0/ ssd480gb
Display(s) AOC 1440p 75hz
Case Aerocool DS Cube
Audio Device(s) asus motherboard intergrated
Power Supply be Quiet pure power L8 600w
Mouse Corsair Ironclaw wireles
Keyboard Logitec G213
Software my favorite World of Tanks :) is that a software?? :)
Lots of good reads about Mantle; I also hope they finally make something that is groundbreaking.
But like one guy from my country used to say, "since I caught myself in a lie... I don't trust anyone any more."
Until I see it with my own eyes I can only hope!!
I'm not a pessimist, but stop talking so much and do the stuff!!
 
Joined
Feb 18, 2009
Messages
1,825 (0.31/day)
Location
Slovenia
System Name Multiple - Win7, Win10, Kubuntu
Processor Intel Core i7 3820 OC@ 4.0 GHz
Motherboard Asus P9X79
Cooling Noctua NH-L12
Memory Corsair Vengeance 32GB 1333MHz
Video Card(s) Sapphire ATI Radeon RX 480 8GB
Storage Samsung SSD: 970 EVO 1TB, 2x870 EVO 250GB,860 Evo 250GB,850 Evo 250GB, WD 4x1TB, 2x2TB, 4x4TB
Display(s) Asus PB328Q 32' 1440p@75hz
Case Cooler Master CM Storm Trooper
Power Supply Corsair HX750, HX550, Galaxy 520W
Mouse Multiple, Razer Mamba Elite, Logitech M500
Keyboard Multiple - Lenovo, HP, Dell, Logitech