Tuesday, November 17th 2020

Apple M1 Beats Intel "Willow Cove" in Cinebench R23 Single Core Test?

Maxon ported its latest Cinebench R23 benchmark to the macOS "Big Sur" Apple M1 platform, and the performance results are groundbreaking. An Apple M1-powered MacBook Pro allegedly scored 1498 points in the single-core Cinebench R23 test, beating the 1382 points of the Core i7-1165G7 reference score as tested by Maxon. These scores were posted to Twitter by an M1 MacBook Pro owner who goes by "@mnloona48_". The M1 chip was clocked at 3.10 GHz for the test. The i7-1165G7 uses Intel's latest "Willow Cove" CPU cores. In the same run, the M1 scored 7508 points in the multi-core test. If these numbers hold up, we can begin to see why Apple chose to dump Intel's x86 machine architecture in favor of its own Arm-powered custom silicon, as the performance on offer holds up against the highest-IPC mobile processors on the market.
Sources: mnloona48_ (Twitter), via Hexus.net Forums

96 Comments on Apple M1 Beats Intel "Willow Cove" in Cinebench R23 Single Core Test?

#76
TheoneandonlyMrK
phanbueyYou mean like putting 8 of these in a Mac Pro? Pretty sure that will be the plan. Single-thread performance is already good; all they need is more cores:

:laugh: mini tower.
Well, at least they'd finally have a legitimate reason for bonkers builds and absurd pricing. I doubt these chips are cheap; the rest of the BOM is popcorn pricing.
Posted on Reply
#77
Searing
The MacBook Air base model seems to settle around 10.015 W at 2.650 GHz while running Cinebench R23 once, then drops to around 8 W and 2.460 GHz on a second run. Where is that guy saying 20+ watts? That should have been obviously ridiculous just from looking at the battery-life gains over the Intel models.

Posted on Reply
#78
Nordic
TheLostSwedeWhy would Apple ever go as high as 90W? We might see some 15-25W parts next, but I doubt we'll ever see anything in the 90W range from Apple.
I don't have an answer for that. The hardware geek in me just wants to see it happen. There are many things I want but will not have.
SelayaHonestly, I'd expect the same as what would happen if you fed a 5950X 600 W - it'll go boom.
I don't mean this chip specifically. I would like to see what Apple could achieve if they tried.
Posted on Reply
#79
techisfun
I bought an M1 Mac mini today and returned it a few hours later. The hardware is revolutionary. Unfortunately, macOS is still a pile of crap. The M1 isn't going to help Apple grow their Mac user base. The OS needs a complete redesign. Big Sur is a coat of paint.

Here are some observations:

It only draws about 28 W from the wall when the CPU is stressed (which is what my PC draws while idle). It draws only 6 W when streaming video.

The M1 doesn't render the desktop smoothly at 144 Hz. I had to set my refresh rate to 120 Hz, and there was still some stuttering when dragging windows.

The x86 versions of Firefox and Chrome failed to stream video from Twitch, though they were otherwise fast enough.

Safari is very fast. JetStream 2: 202; Octane: 63K; Kraken: 450 ms. But I don't like Safari.

Resuming is fast, but boot times are slow because macOS is bloated.
Posted on Reply
#80
dicobalt
Apple's 5nm 3 GHz processor beats a 10nm 1.69 GHz processor; that's not a victory in my book.
Posted on Reply
#81
Valantar
PunkenjoyWell, you're overthinking this. CPU architecture is all about compromises and design choices. High-core-count Arm CPUs were made for the datacenter, to give hyperscalers as many cores as possible, not necessarily to compete on single-thread performance. Design choices were made to get more cores even if that implies less single-core performance.

Apple, with the M1 and their other CPUs, has a different focus: they're after single-core performance (not so much multithreaded performance) because they think that's what serves them best. That doesn't mean a 16-core M1 would 1. be commercially doable and 2. beat the 5950X.

It also doesn't mean AMD or Intel can't do better single-core performance, but when you have a limited amount of power and a limited number of transistors, it's all a matter of choices.

AMD and Intel use mostly the same architecture from laptops up to the datacenter, with some focus on the datacenter; Apple, right now, designed its M1 for consumer devices and made its design choices accordingly.

There's also the process-node difference; I think an AMD or Intel CPU on 5nm would fare much better against them than today's parts do. The CPU instruction set, when it comes to end performance, is nowadays more a religion or something to cheer for than a real factor. Both Arm and x86 have a front end to decode instructions, both have back-end execution units, both use SIMD.

In the end it's a matter of how well you use your transistors and what your end goal is. For Apple, it's integration plus single-thread performance. For AMD, it's flexibility (chiplets) and maximum performance.

For Intel, well, I'm not sure even Intel knows right now, but that's another subject...
Sorry, but you're quite mistaken here. Let's take Zen 3 as an example: increasing single-thread performance was the explicit main goal of that design - which is a whole new architecture, with every part of the core changed from Zen 2 - and it did increase ST performance in a very impressive way compared to its predecessors. Yet it still just barely beats the M1. AMD has a 105W TDP and ~144W total package power draw to work with. If there were more ST scaling to be found, they have all the headroom they need to exploit it. Yet their cores individually max out at ~20W. Why? Because the architecture doesn't scale past that: it either grows unstable or overheats. Of course Apple saves a lot on baseline power by basing this on a mobile architecture and not having power-hungry parts like multiple IF links, PCIe controllers and external memory, which helps them gain a lot of baseline efficiency. But it's undeniable that the cores in the M1 are massively efficient and performant at the same time.

It's obvious that AMD could have made a wider core with tons of transistors and made a higher IPC, lower clocking design like this. But could they have done so at the same level of efficiency? AnandTech suggests no. Of course Apple has a major advantage here in being vertically integrated and as such not caring that much about SoC costs as long as they can preserve their margins. Neither AMD nor Intel can operate that way, pushing them towards smaller and more affordable core designs. But quite frankly, that isn't much of an argument against the M1 being a major achievement, it just shows that Apple's tactics are working. Too bad for us non-Apple users, really.

As for a 16-core M1 being doable? It would definitely be a gargantuan piece of silicon, likely comparable to the Xbox Series X SoC in area, though of course on 5nm and not 7. I don't see Apple having a problem with that, given that it would be - at the low end - for >$2000 laptops and desktops (with very cut down chips at that price, allowing for salvaging a lot of faulty chips), scaling to well above $5000 for top configurations. The margins are more than there to pay for a big chip. As for performance scaling: they'll of course need to change their memory architecture and design an interconnect that works for that many cores. But that isn't that hard. In terms of pure performance, if the M1 nearly matches the 5950X at <1/4 the per-core power, there's little reason why a bigger chip wouldn't keep that performance at a minimum. Heat density will definitely be an issue, but one that can be solved by spreading core clusters out across the SoC or adding a vapor chamber.

As for AMD or Intel on 5nm being more competitive: well, obviously to some degree, but I wouldn't expect current TSMC 5nm to clock even close to as high as current TSMC 7nm, so that move might actually lose them performance unless it's also a wider architecture. Would it allow them to catch up in perf/W? Not even close. A single node change doesn't get you 75% power savings.
Smartcom5They recently patented technological approaches and algorithms to shift compute threads in a heterogeneous environment (aka hybrid architectures) according to their needed instruction-set extensions – arranging the threads across the given big.LITTLE cores, based on their instruction set, fully autonomously in the silicon itself.

Thus, they patented a way to make scheduler fixes needless in any heterogeneous environment – they wouldn't have done that if they didn't plan to also make big.LITTLE designs, like, for real.

Those official statements are just there to appease the competition and cosy up to Intel, nVidia, ARM et al. all along.
Just like how they lulled Intel into a (false) sense of security when they proclaimed their official capitulation in '11 – how they would strike sail and henceforth, awed before Big Blue™, content themselves with the fallen breadcrumbs – just to strike back even harder out of nowhere like they did with Ryzen.

Seems I'm the only one holding the firm belief that AMD – provided ARM/RISC-V reaches any greater significance and/or broader adoption (read: market saturation) – could rather spontaneously come up with some ARM-based (or RISC-V) designs of their own pretty quickly, likely even following such a hybrid nomenclature.

After all, they never stopped their work on K12 in the first place; it was just postponed indefinitely and put on hold in favour of what we now call Zen – and curiously enough, just a few months ago, AMD's K12 popped up again out of nowhere … Now remember who came to visit AMD to work on the AMD ARMv8-A-based K12 design …
K, jk!
That prospect, it's thrilling already, isn't it? ツ

Smartcom
Wasn't AMD's quote about not wanting hybrid architectures on the desktop referring to Alder Lake? Hybrid for mobile makes perfect sense, and I don't doubt AMD could scale down Zen to a low power design for that use quite easily. That being said, that patent doesn't describe a method for entirely obviating the need for an architecture-aware scheduler; allocating threads based on the instruction set only works for workloads where only one set of cores supports that instruction set, such as power-hungry AVX loads. You'll still need the scheduler to know to move high-performance threads to high-performance cores even when they use instruction sets common to the two clusters.
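To make that concrete, here's a toy Python sketch of what ISA-based placement amounts to - all the names, ISA sets and thresholds are made up, and this is emphatically not what AMD's patent actually specifies:

```python
# Toy model of instruction-set-aware thread placement on a hybrid CPU.
# Purely illustrative: the cluster ISAs, Thread type and thresholds are
# invented, not AMD's patented mechanism.
from dataclasses import dataclass, field

LITTLE_ISA = {"base", "avx2"}        # little cores lack e.g. AVX-512
BIG_ISA = LITTLE_ISA | {"avx512"}    # big cores support everything

@dataclass
class Thread:
    name: str
    isa: set = field(default_factory=lambda: {"base"})
    demand: float = 0.5              # 0..1, how performance-hungry it is

def place(t: Thread) -> str:
    # The patent-style rule: a thread needing an extension only the big
    # cluster has can be routed there by the silicon itself...
    if not t.isa <= LITTLE_ISA:
        return "big"
    # ...but when both clusters support the thread's instructions, the
    # ISA rule says nothing: an architecture-aware OS scheduler still has
    # to push hot threads to big cores, e.g. by load or priority.
    return "big" if t.demand > 0.7 else "little"

print(place(Thread("encode", {"base", "avx512"})))   # big (ISA forces it)
print(place(Thread("game", {"base"}, demand=0.9)))   # big (scheduler's call)
print(place(Thread("sync", {"base"}, demand=0.1)))   # little
```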
InVasManiAMD and Intel could just do a 32-bit/64-bit big.LITTLE approach: make one chip 32-bit only and the other 64-bit only.
Oh dear lord no. The majority of applications today are 64-bit. That would mean the "little" chip couldn't run them at all. Windows 10 is AFAIK 64-bit only, so it couldn't even run the OS! No. Just no.
BansakuSo the M1 scores 7508 points in the multi-core test. Wow, I get close to 14,000 in R23 in macOS Big Sur with my 3700X Hackintosh. Let's hope Apple has something else up their sleeve, as otherwise the iMacs and Mac Pros will be severely gimped in performance compared to equivalent x64 desktops.
You really shouldn't be surprised that a ~20-24W hybrid 4c (big) + 4c (little) CPU lags significantly behind an 8c16t all-big-core 65W (88W under all-core loads) CPU. What makes this impressive is that they're managing half your score with less than a third of the power, half the high performance cores, and no SMT.
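To put rough numbers on that (using your 14k score, the ~88 W all-core figure, and the ~22 W midpoint of the M1 package estimate - all estimates, not measurements):

```python
# Back-of-envelope Cinebench R23 points-per-watt, using the numbers
# quoted in this thread (both power figures are estimates).
m1_score, m1_watts = 7508, 22      # M1 multi-core, ~20-24 W package
r7_score, r7_watts = 14000, 88     # 3700X Hackintosh, all-core draw

print(round(m1_score / m1_watts))  # ~341 pts/W
print(round(r7_score / r7_watts))  # ~159 pts/W, i.e. less than half
```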
SearingThe MacBook Air base model seems to settle around 10.015 W at 2.650 GHz while running Cinebench R23 once, then drops to around 8 W and 2.460 GHz on a second run. Where is that guy saying 20+ watts? That should have been obviously ridiculous just from looking at the battery-life gains over the Intel models.
20-24W is for the Mac Mini, not the MacBook Air.
Posted on Reply
#82
Vya Domus
SearingWhere is that guy saying 20+ watts? That should have been obviously ridiculous just from looking at the battery-life gains over the Intel models.
This confirms you are an arrogant fanboy who, despite knowing nothing at all, keeps taking jabs at me even though you have been proven wrong, and this is beginning to look really pathetic on your end.

AnandTech reviewed the Mac Mini, not the MacBook Air, and estimated 20 W power draw from the SoC at full load.

For your sake I hope you are just a massive fanboy and are not actually this dumb. Regardless, I am not sticking around to find out; off to the ignore list you go.
Posted on Reply
#83
dyonoctis
ValantarOf course Apple has a major advantage here in being vertically integrated and as such not caring that much about SoC costs as long as they can preserve their margins. Neither AMD nor Intel can operate that way, pushing them towards smaller and more affordable core designs. […] Hybrid for mobile makes perfect sense, and I don't doubt AMD could scale down Zen to a low power design for that use quite easily.
IIRC, that's also the reason why Qualcomm isn't expected to ever come close to the A-series chips: they can't afford to make a chip that expensive.

The 15 W Zen 2 parts are not too shabby for x86; single-core perf didn't suffer that much. We have yet to see if low-power Zen 3 can do the same.
Posted on Reply
#84
Valantar
dyonoctisIIRC, that's also the reason why Qualcomm isn't expected to ever come close to the A-series chips: they can't afford to make a chip that expensive.

The 15 W Zen 2 parts are not too shabby for x86; single-core perf didn't suffer that much. We have yet to see if low-power Zen 3 can do the same.
You're right, the difference in ST perf is very small - as I said, even a 5950X barely exceeds 20W in single-core power (which drops to 6-7W at ~3.8GHz under a full 16c load), so we should expect similar ST perf (at least in 25W configurations) from mobile Zen 3 too. Still, the relatively low ST power is a clear indication that there isn't anything "left in the tank" in terms of ST performance for Zen 3 - to increase it they need either much higher clocks (not feasible without exotic cooling and tons of power) or a more fundamental reworking of the architecture for IPC increases. They have a 144W power limit to work with, after all, so if pushing more power into a single core made a difference, they would obviously be doing it. If a full top-to-bottom redesign, changing every single part of the core from Zen 2, netted them a ~19% increase, they're going to need drastic changes to come close to the IPC Apple is demonstrating here. Of course it's entirely possible to compensate for IPC by increasing clocks - which is what AMD and Intel are both doing when compared to the M1 - and each approach has clear limitations. It's highly unlikely Apple can scale this design past 4GHz, and even that might require a lot of power. But nonetheless, the competitive path forward for both AMD and Intel became much more difficult with this launch.
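As a rough per-clock illustration (the M1 score and clock are from the article above; the 5950X figures are ballpark numbers I'm assuming, not measurements):

```python
# Crude "per-clock throughput" comparison in Cinebench R23 1T.
# Assumed figures: the 5950X score and its ~4.9 GHz boost clock.
chips = {
    "M1": (1498, 3.1),       # score, GHz (from the article)
    "5950X": (1640, 4.9),    # score, GHz (assumption, ballpark)
}
for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} pts/GHz")
# M1 ~483 vs 5950X ~335 -> ~44% more work per clock, which the x86
# side currently makes up for with much higher frequency.
```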

You're probably right about QC not daring to make anything that expensive though. IIRC the cost of a high end smartphone chip is typically in the $50 range (or at least it was a few years back). Even doubling that would likely be very close to the pure silicon (no processing, packaging, binning, etc.) cost of the M1. Which would leave QC taking a net loss on every part they sell. And given how slim margins are in the mobile industry (for everyone but Apple, that is), going higher in chip prices for a non-integrated company is likely not feasible. Though mixed designs with 1-2 X1 cores might serve as a good middle ground.
Posted on Reply
#85
Searing
Vya DomusThis confirms you are an arrogant fanboy who, despite knowing nothing at all, keeps taking jabs at me even though you have been proven wrong, and this is beginning to look really pathetic on your end.

AnandTech reviewed the Mac Mini, not the MacBook Air, and estimated 20 W power draw from the SoC at full load.

For your sake I hope you are just a massive fanboy and are not actually this dumb. Regardless, I am not sticking around to find out; off to the ignore list you go.
And he didn't show the terminal power draw. I did. I'll show you the mini results later. Keep up the lies and FUD. It sure makes sense that Apple has the longest battery life while you pretend the M1 uses more power than Intel. /s
Posted on Reply
#86
Valantar
SearingAnd he didn't show the terminal power draw. I did. I'll show you the mini results later. Keep up the lies and FUD. It sure makes sense that Apple has the longest battery life while you pretend the M1 uses more power than Intel. /s
The MBA you showed data from is a passively cooled laptop. The MM AT tested is an actively cooled desktop with a higher clock speed spec. It stands to reason that the latter consumes more power. More than Intel, though? Not really, given that Intel has stopped specifying TDP beyond a range, which is 10-25W for their latest mobile parts. And 25W configurations are quite common in premium thin-and-lights like the XPS 13 - but those are again actively cooled. 10W Intel configurations are passively cooled too, but cut their base clocks a lot compared to 15 or 25W configurations.
Posted on Reply
#87
seth1911
It might be faster with macOS and a native Cinebench build, but it can't run Windows or any Linux distro.

It's a consumer thing. Want a golden cage?
Yes
No

If you buy software in the App Store for a few hundred or thousand dollars, you can't switch back to Windows or Linux without buying the software again.
In a few years Apple can raise prices higher and higher, because you can't get out of the Apple ecosystem. :toast:
Posted on Reply
#88
Nygma
seth1911It might be faster with macOS and a native Cinebench build, but it can't run Windows or any Linux distro. […]
Runs Linux just fine.
Posted on Reply
#89
dragontamer5788
ValantarNope, not the only one noticing that, and there's no doubt that these chips are really expensive compared to the competition. Sizeable silicon, high transistor counts, and a very expensive node should make for quite a high silicon cost. There's another factor to the equation though: Intel (not really, but not that far behind) and AMD are delivering the same performance with much smaller cores and on a bigger node but with several times the power consumption. That's a notable difference. Of course comparing a ~25W mobile chip to a 105W desktop chip is an unfair metric of efficiency, but even if Renoir is really efficient for X86, this still beats it.
This M1 core is basically twice the size of an x86 core, be it Intel Skylake or AMD Zen: twice the reorder buffer, twice the L1 cache, twice the execution ports, twice the decode width. 8 instructions per clock tick vs. 4.

Apple then downclocked the design to 2.8 GHz. There's no mystery here: Apple designed the widest core in all of computing history (erm... maybe 2nd widest; the POWER9 SMT8 core was 12 instructions per clock wide. But the M1 is the widest core ever on the consumer market).

The "secret" to silicon power consumption is quite simple: dynamic power scales as O(voltage² × frequency), and guess what? Achievable frequency is tied to voltage, so that's really O(voltage³). Smaller transistors can operate at lower voltage (the 5nm advantage). Lower frequency means lower voltage, and lower voltage means far less power. By making a wide (but low-frequency) core, Apple beats everyone in single-threaded performance and power efficiency simultaneously.
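Here's a toy model of that cubic scaling - the capacitance constant and the voltage/clock pairs are invented round numbers, only the shape of the curve matters:

```python
# Dynamic power ~ C * V^2 * f; in the interesting range, achievable f
# rises roughly with V, hence the effective V^3 behaviour.
def dyn_power(volts, ghz, c=3.0):   # c: arbitrary effective capacitance
    return c * volts**2 * ghz       # watts, toy units

hot = dyn_power(1.4, 5.0)    # narrow core chasing 5 GHz at high voltage
cool = dyn_power(0.9, 3.0)   # wide core at Apple-like clocks/voltage

print(f"{hot:.1f} W vs {cool:.1f} W ({hot / cool:.1f}x)")  # 29.4 vs 7.3, ~4x
```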

Same idea with AVX-512 (though the details differ): Intel is willing to downclock their processor to perform 16 32-bit operations simultaneously, because low frequency and low voltage with lots of work per cycle just scales better.

Anyway, you downclock and lower the voltage for power efficiency, then increase the width of the core to compensate (and make things faster). Your processor needs to search harder for instruction-level parallelism, but that's a well-known trick at this point (aka out-of-order execution).

That's all there is to it. Everything after that is hypotheticals about the future. Can Intel / AMD make an 8-wide decoder? Or would the x86's variable-length instructions make such parallelism hard to accomplish?

-------------

Vya Domus has a point: traditionally, you'd just make a 2nd core instead of doubling the core size (or you go wide SIMD, which does scale well). What Apple is doing here is betting that customers want that faster single-core performance instead of shuffling work off to a 2nd or 3rd core.

I agree with Apple, however. I think the typical consumer would prefer a slightly faster (and slightly more efficient) single core rather than more cores. Given the choice between 8 cores (all 8-wide decode/execute) and 16 cores (of 4-wide decode/execute), the typical consumer probably wants the 8-core × 8-wide part. Note: 8-wide execution is NOT 2x faster than 4-wide execution.
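A quick toy illustration of why width has diminishing returns (the per-cycle "ready" counts are invented, and a real out-of-order core smooths these bursts better than this crude model suggests):

```python
# Width has diminishing returns because instruction-level parallelism
# is bursty: many cycles simply don't have 8 independent instructions.
trace = [2, 6, 1, 8, 3, 2, 9, 4, 1, 5]   # made-up "ready per cycle" counts

def avg_ipc(width):
    # a width-W core can retire at most W instructions each cycle
    return sum(min(ready, width) for ready in trace) / len(trace)

print(avg_ipc(4))   # 2.9 IPC
print(avg_ipc(8))   # 4.0 IPC -> ~38% faster, nowhere near 2x
```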
Posted on Reply
#90
Aquinus
Resident Wat-man
NygmaRuns Linux just fine.
I was unaware that Linux had support for the M1 chip. :wtf: I'm going to cry foul on that one. I haven't heard of anyone having cracked the bootloader to run Linux yet on these new machines.
Posted on Reply
#91
Valantar
AquinusI was unaware that Linux had support for the M1 chip. :wtf: I'm going to cry foul on that one. I haven't heard of anyone having cracked the bootloader to run Linux yet on these new machines.
I guess it theoretically does, but there's no way on earth Apple is letting anyone boot an M1-equipped machine into Linux.
Posted on Reply
#92
Aquinus
Resident Wat-man
ValantarI guess it theoretically does, but there's no way on earth Apple is letting anyone boot an M1-equipped machine into Linux.
I've read that it's already being worked on. I honestly don't think it's impossible, although there is no way that it's going to be as clean of an experience as just using OS X.
Posted on Reply
#93
Steevo
AquinusI think people are forgetting that this 10 W chip is competing with chips that have TDPs as high as 35-45 W. Come on, let's gain a little bit of perspective here. It's not the best, but it's pretty damn good for what it is. If this is what Apple can do with a 10 W power budget, imagine what they can do with 45 W.
So, 20 W of fixed-function (the essential definition of the RISC ARM architecture) hardware on a 5nm node. It's impressive, but not groundbreaking. If we go by the rule of thumb we're seeing with TSMC's node shrinks, it falls in line with a good design on a good process, with thermals possibly being the largest limiting factor.

And on your other comment about power density: yes, W/cm² is an issue for low-power-optimized designs, which is why TSMC 7nm can run 5 GHz at 1.5 V or just 2.5 GHz at 1 V. It's about the design - putting space between the hottest parts so they can flux heat into cooler areas without letting the magic smoke out.
Posted on Reply
#94
Aquinus
Resident Wat-man
SteevoSo, 20 W of fixed-function (the essential definition of the RISC ARM architecture) hardware on a 5nm node. It's impressive, but not groundbreaking.
...but:
SearingThe MacBook Air base model seems to settle around 10.015 W at 2.650 GHz while running Cinebench R23 once, then drops to around 8 W and 2.460 GHz on a second run. Where is that guy saying 20+ watts? That should have been obviously ridiculous just from looking at the battery-life gains over the Intel models.

Posted on Reply
#95
seth1911
I think there's something not right with the AnandTech test, because the Apple iGPU supposedly performs close to a mobile GTX 1650 :kookoo:

Nvidia needs 50 W for 2.4 TFLOPS FP32, including 4 GB of GDDR5, and now Apple supposedly does much the same with 7 W on its iGPU :shadedshu:

So everyone is stupid:
Nvidia,
ARM with Mali,
Qualcomm with Adreno,
AMD with RDNA

Only Apple got the holy grail, with their iGPU performing nearly 8 times better than anyone above :roll:
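Run the numbers on what that claim implies (the vendor TFLOPS figure and the wattages above, taken at face value):

```python
# Implied GPU efficiency gap, GFLOPS per watt, from the figures above.
nvidia = 2400 / 50   # mobile GTX 1650: 2.4 TFLOPS at ~50 W -> 48 GFLOPS/W
apple = 2400 / 7     # M1 iGPU, similar TFLOPS at ~7 W -> ~343 GFLOPS/W

print(round(apple / nvidia, 1))  # ~7.1x - hence the "nearly 8 times"
```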
Posted on Reply
#96
dyonoctis
seth1911I think there's something not right with the AnandTech test, because the Apple iGPU supposedly performs close to a mobile GTX 1650 :kookoo: […] Only Apple got the holy grail, with their iGPU performing nearly 8 times better than anyone above :roll:
AMD is still using good ol' Vega for their iGPUs - lack of competition, yada yada... Now they might have the incentive to be more bleeding-edge.
As for Nvidia, they don't develop for low power first; their small GPUs are just heavily cut-down big GPUs. Apple is doing the reverse: they started from something made for efficiency and are making it bigger and bigger.

Nvidia's best low-power SoC is still the Tegra X1 from 2015, which is still based on Maxwell. Them owning Arm won't have any effect until several years from now, but who knows? Nvidia has been marketing their mobile and desktop GPUs as the holy grail of content creation so much that the revival of the Mac might make them try harder.
Posted on Reply