I think the problem is that people aren't considering what the percentages are relative to.
Let's consider the first item, Far Cry Primal:
RX 480: 58.1 FPS
GTX 1060: 64.7 FPS
111.36% was the result @W1zzard got.
If we're going to compare the GTX 1060 against the RX 480, we need to take the FPS difference between the two relative to the GTX 1060.
That means the difference would be (GTX FPS - RX FPS), which you would then divide by the GTX frame rate to get a percentage relative to the GTX's result.
(64.7 - 58.1) / 64.7 = 0.102
--- Or 10.2%, or 110.2% as it would be reported on the graph, since the 10.2% represents the gain or loss relative to the GTX 1060.
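To sanity-check that arithmetic, here's a quick Python sketch (the variable names are just ones I made up for this post):

rx_480 = 58.1     # average FPS for the RX 480
gtx_1060 = 64.7   # average FPS for the GTX 1060
diff_vs_gtx = (gtx_1060 - rx_480) / gtx_1060
print(round(diff_vs_gtx * 100, 2))  # 10.2 -> the difference expressed relative to the GTX 1060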
If we're going to compare the RX 480 against the GTX 1060, we essentially swap the GTX and RX values so the difference is relative to the RX 480.
That means the difference in this case would be (RX FPS - GTX FPS); then, like above, you would divide by the frame rate of the card you're comparing against, which would be the RX.
(58.1 - 64.7) / 58.1 = -0.1136
--- Or -11.36%, or 88.64% as it's being reported on the graph (100% + (-11.36%)), which represents the gain or loss relative to the RX 480.
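Same sanity check in the other direction (again just a Python sketch with my own variable names):

rx_480 = 58.1
gtx_1060 = 64.7
diff_vs_rx = (rx_480 - gtx_1060) / rx_480
print(round(diff_vs_rx * 100, 2))         # -11.36 -> the difference expressed relative to the RX 480
print(round((1 + diff_vs_rx) * 100, 2))   # 88.64  -> 100% + (-11.36%), how it would read on the graph
print(round(gtx_1060 / rx_480 * 100, 2))  # 111.36 -> GTX FPS divided by RX FPS, which is where the 111.36% on the graph comes from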
The problem with W1zz's logic is that he's basically saying, "Let's get the difference between the two relative to the RX 480, but merely flip the sign on the percent difference to reflect a gain for the GTX." While this does show a difference, it's still a difference relative to whatever the divisor in the initial calculation was, which means that the 11.36% gain is relative to the 58.1 FPS achieved on the RX 480,
NOT the 64.7 FPS achieved by the GTX 1060.
What people need to realize is that the GTX 1060 (in the case of Far Cry Primal) might be 11.36% faster than the RX 480, but that also means the RX 480 is only 10.2% slower than the GTX 1060, because the value used as the divisor changes.
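That asymmetry is easy to see if you put both calculations side by side (hypothetical percent_diff helper, Python):

def percent_diff(card_fps, baseline_fps):
    # difference between two results, expressed relative to baseline_fps
    return (card_fps - baseline_fps) / baseline_fps * 100

print(round(percent_diff(64.7, 58.1), 2))  # 11.36 -> GTX 1060 measured against the RX 480
print(round(percent_diff(58.1, 64.7), 2))  # -10.2 -> RX 480 measured against the GTX 1060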
This works well when strictly comparing two cards, but it doesn't work so well when you're trying to compare several different cards, because the divisor is constantly changing to match the card being compared against. Every value only means something between those two cards, so a comparison between two different pairings (even if one of the cards is the same) doesn't mean a whole lot.
I would argue that something based on 60 FPS (the baseline for smooth gameplay) would yield better results because the scale is consistent for every card being compared: 100% (or 0% gain/loss) would indicate an average of 60 FPS was maintained, >100% means over 60 FPS was maintained, and <100% means under 60 FPS was maintained. This has the virtue of the divisor being constant across all cards, so comparisons between any set of cards will be consistent.
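A rough sketch of what I mean, with 60 FPS as the fixed divisor (Python again, names are mine):

BASELINE_FPS = 60.0  # the constant every card is divided by

for name, fps in [("RX 480", 58.1), ("GTX 1060", 64.7)]:
    print(name, round(fps / BASELINE_FPS * 100, 1))
# RX 480 96.8     -> just under the 60 FPS target
# GTX 1060 107.8  -> comfortably over it

Because the divisor never changes, 96.8% vs 107.8% means the same thing no matter which other cards end up on the chart.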
I think having the relative numbers along with an explanation of exactly what they represent and how they were derived would make more sense. I would surmise that the average person doesn't realize that a GTX 1060 being X% faster than an RX 480 does not mean the RX 480 is X% slower than the GTX 1060, because percentages change when what you're comparing against changes.
Either way, this is what I came up with. I think W1zz's numbers are correct and that it's more people's understanding of them that's causing the confusion.
View attachment 77186
Personally, I like the idea of calculating numbers relative to a constant like 60 FPS so it describes how playable a game is rather than how much faster one GPU is than another. What good are two GPUs if neither of them can play a game smoothly?