Tuesday, February 14th 2023

Primate Labs Launches Geekbench 6 with Modern Data Sets

Geekbench 6, the latest version of the best cross-platform benchmark, has arrived and is loaded with new and improved workloads to measure the performance of your CPUs and GPUs. Geekbench 6 is available for download today for Android, iOS, Windows, macOS, and Linux.

A lot has changed in the tech world in the past three years. Smartphone cameras take bigger and better pictures. Artificial intelligence, especially machine learning, has become ubiquitous in general and mobile applications. The number of cores in computers and mobile devices continues to rise. And how we interact with our computers and mobile devices has changed dramatically - who would have guessed that video conferencing would suddenly surge in 2020?
To keep up with these advancements, we've released Geekbench 6. This latest version of Geekbench has been designed with the modern user in mind, reflecting how we actually use our devices in 2023.

So, what's new in Geekbench 6? Let's take a look!

New and Updated Real-World Tests
Geekbench tests have always been grounded in real-world use cases and modern applications. With Geekbench 6, we've taken this to the next level by updating existing workloads and designing several new ones, including workloads that (one such workload is sketched after the list):
  • Blur backgrounds in video conferencing streams
  • Filter and adjust images for social media sites
  • Automatically remove unwanted objects from photos
  • Detect and tag objects in photos using machine learning models
  • Analyse, process, and convert text using scripting languages
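To give a flavour of what the background-blur style of workload involves (an illustrative sketch only, not Primate Labs' code: the grayscale frame, box filter, and ready-made person mask are all simplifying assumptions), a minimal version might look like this:

```cpp
// Illustrative sketch of a "blur the background" workload -- not Geekbench's
// actual implementation. Foreground pixels (mask == 1) are copied through
// unchanged; background pixels are replaced by a box-filter average.
#include <cstdint>
#include <cstdio>
#include <vector>

void blur_background(const std::vector<uint8_t>& src,   // grayscale frame, w*h
                     const std::vector<uint8_t>& mask,  // 1 = person, 0 = background
                     std::vector<uint8_t>& dst,         // output frame, w*h
                     int w, int h, int radius)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const int idx = y * w + x;
            if (mask[idx]) { dst[idx] = src[idx]; continue; }  // keep the subject sharp
            int sum = 0, count = 0;                            // average the surrounding box
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    const int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
                        sum += src[ny * w + nx];
                        ++count;
                    }
                }
            }
            dst[idx] = static_cast<uint8_t>(sum / count);
        }
    }
}

int main()
{
    const int w = 640, h = 360;                 // arbitrary frame size
    std::vector<uint8_t> frame(w * h, 128), mask(w * h, 0), out(w * h);
    for (int y = h / 4; y < 3 * h / 4; ++y)     // pretend the middle of the frame is a person
        for (int x = w / 4; x < 3 * w / 4; ++x)
            mask[y * w + x] = 1;
    blur_background(frame, mask, out, w, h, 4);
    std::printf("done: %dx%d frame processed\n", w, h);
}
```

A benchmark built around this would time many frames and report throughput; the real workload presumably operates on full-colour frames with an ML-generated segmentation mask rather than a hard-coded rectangle.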
Modern Data Sets
We also updated the datasets that the workloads process so they better align with the file types and sizes that are common today. This includes:
  • Higher-resolution photos in image tests
  • Larger maps in navigation tests
  • Larger, more complex documents in the PDF and HTML5 Browser tests
  • More (and larger) files in the developer tests
True-to-Life Scaling
The multi-core benchmark tests in Geekbench 6 have also undergone a significant overhaul. Rather than assigning separate tasks to each core, the tests now measure how cores cooperate to complete a shared task. This approach improves the relevance of the multi-core tests and is better suited to measuring heterogeneous core performance. It also reflects the growing trend of pairing "performance" and "efficiency" cores in desktops and laptops, not just smartphones and tablets.
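As a rough illustration of the difference (a sketch under our own assumptions, not Geekbench's actual code), the cooperative model has every thread pull chunks of one shared job from a common counter, so faster cores naturally end up doing more of the work while slower cores still contribute:

```cpp
// Sketch: N threads cooperate on ONE shared task by claiming chunks from a
// shared atomic counter, instead of each thread being handed its own
// independent task. Not Geekbench's actual implementation.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

double sum_cooperatively(const std::vector<double>& data, unsigned num_threads)
{
    constexpr std::size_t kChunk = 4096;          // work unit a core claims at a time
    std::atomic<std::size_t> next{0};             // shared progress counter
    std::vector<double> partial(num_threads, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            for (;;) {
                const std::size_t begin = next.fetch_add(kChunk);  // claim the next chunk
                if (begin >= data.size()) break;                   // shared task is finished
                const std::size_t end = std::min(begin + kChunk, data.size());
                partial[t] += std::accumulate(data.begin() + begin,
                                              data.begin() + end, 0.0);
            }
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main()
{
    std::vector<double> data(10'000'000, 1.0);
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::printf("sum = %.0f using %u threads\n", sum_cooperatively(data, n), n);
}
```

Because fast and slow cores simply claim different numbers of chunks, a heterogeneous chip's efficiency cores add whatever they can instead of dragging the whole run down to their pace, which is what makes this style of test a better fit for performance-plus-efficiency designs.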

Praise for Geekbench 6
Geekbench has long been an industry-standard benchmark for customers and device manufacturers alike. It is used by semiconductor companies like Arm; chipset and CPU makers like Qualcomm Technologies, Inc., MediaTek Inc., and AMD; device manufacturers like ASUS, Lenovo, Microsoft, Motorola, and Vivo; and even car manufacturers like Mercedes-Benz AG.

"Geekbench has been and will continue to be an important benchmark that our teams have utilized in the architectural design and implementation of our Snapdragon platforms."
Qualcomm Technologies, Inc.

"Geekbench is heavily used by MediaTek for its easy access and fairness in comparing cross-platform results. R&D can put less effort into checking software differences on diverse processor architectures and pay more attention to identifying actual hardware bottlenecks. Geekbench 6 reduces system services' impact. This helps us and our customers better analyze the performance differences over the competition."
MediaTek Inc.

Launch Sale
We're celebrating the release of Geekbench 6 with a launch sale. From now until February 28, we're offering 20% off Geekbench 6 Pro on the Primate Labs Store.

Meanwhile, Geekbench 6 is free (and will remain free) for personal use.

Whether you're a tech enthusiast or in charge of a computer lab or IT department, Geekbench 6 is the benchmark tool you need. With its updated workloads and reworked multi-core tests, you can be confident you're getting accurate, reliable results that reflect how your computers and devices perform in real-world settings.
Source: Primate Labs

24 Comments on Primate Labs Launches Geekbench 6 with Modern Data Sets

#1
Vya Domus
Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again ?
#2
Darmok N Jalad
Vya Domus: Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again ?
Apple chips do really well in a lot of tasks, and some of that is due to specialized hardware and APIs that developers actually use to speed up processes. Do they have to be good at everything or just what most people actually do? Once you get into specific needs, you need specific hardware. And trust me, if the Apple chips actually sucked, you’d hear about it. Yes, they have loyal customers, but those same folks have no problem calling Apple out for crap designs, like bad keyboards, expensive wheels, throttling CPUs, and mice that can only be charged while upside down.
#3
Fouquin
Vya Domus: Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again ?
If you want x86 to look good in GeekBench, write a better malloc. Linux is already pretty close, but macOS is still the fastest. It is the primary reason why Ryzen "iMac" machines topped GeekBench 5 charts for Zen 3 scores, macOS is just built different.
#4
JohH
Vya Domus: Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again ?
My 7950X scores over 3000 in single core.
I'm not sure the M2 gained on it much in single core.
But in multi-core the much higher inter-core memory bandwidth will cause the M1/M2 to gain relatively.
#5
Denver
Who will be favored this time...
#6
zlobby
Denver: Who will be favored this time...
Judging by the article, probably Mediatek.
#7
Fierce Guppy
Fouquin: If you want x86 to look good in GeekBench, write a better malloc. Linux is already pretty close, but macOS is still the fastest. It is the primary reason why Ryzen "iMac" machines topped GeekBench 5 charts for Zen 3 scores, macOS is just built different.
What's being tested? Compiler efficiency or the CPU?
#8
Tom Yum
The main problems with Geekbench 5 were the short duration of each test, which favoured more 'boosty' processors and flattered processors with poor cooling (the tests were too short to show any impact from thermal throttling), and the multicore test, which just ran disparate tasks across each core and measured the total time taken, and therefore didn't reflect that some tasks don't scale well across multiple cores.

It seems they've fixed the latter, but has anyone tested if they've fixed the former? Does the test take significantly longer than Geekbench 5?
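Going back to the scaling point above, Amdahl's law puts rough numbers on why a "one independent task per core" test overstates real multi-core gains; the 80% parallel fraction below is an arbitrary example, not a measured figure:

```cpp
// Amdahl's law sketch: speedup = 1 / ((1 - p) + p / n),
// where p is the fraction of the task that can run in parallel and n is the
// core count. p = 0.8 is a made-up value purely for illustration.
#include <cstdio>

int main()
{
    const double p = 0.8;                              // hypothetical parallel fraction
    for (int n : {2, 4, 8, 16, 32}) {
        const double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores -> %.2fx speedup\n", n, speedup);
    }
    // With p = 0.8 the curve flattens fast (16 cores give only ~4x), something
    // a test that hands each core its own unrelated job will never show.
}
```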
#9
Vya Domus
Darmok N Jalad: And trust me, if the Apple chips actually sucked, you’d hear about it.
No one says they suck; I am just talking about Geekbench, which is notoriously biased toward Apple's platform.
Fouquin: If you want x86 to look good in GeekBench, write a better malloc.
I seriously doubt that's particularly relevant here. By the way, I can't find any info on malloc being slower on Windows.
Fierce Guppy: What's being tested? Compiler efficiency or the CPU?
It goes without saying that any cross platform CPU benchmark should not rely to any extensive degree on system calls.
#10
Dredi
Vya Domus: It goes without saying that any cross platform CPU benchmark should not rely to any extensive degree on system calls.
To an extent, yes. Though forcing a slower version of a library to "even out" platforms is also stupid, considering that all software on that platform utilizes the fast version.

For good comparisons of the hardware, we should run a native Linux OS on all test systems, with the fastest compilers and libraries available for each hardware platform. Because of different instruction sets, each compiler will have an edge in some cases and some deficiencies in others. That’s just how it goes.

Also, if you, for example, compare the media engines of different processors, like Geekbench does in some tests, it makes sense that system calls are used.
#11
Fouquin
Vya Domus: I seriously doubt that's particularly relevant here.
It's extremely relevant and it's absolutely a problem on Windows. Ask anyone in OS development and optimization about it; it's not a well-guarded secret. The OSS guys have it figured out but they don't have the team cohesion that Apple engineers do, nor the funding. This isn't really an argument, it's a simple fact.
Fierce Guppy: What's being tested? Compiler efficiency or the CPU?
Malloc, or the memory allocator, is the last line of definition in the performance and efficiency of any CPU architecture. Your CPU design can load a trillion operations per second, but if the memory allocator in software can only parse a couple billion pointers and fails to dynamically allocate blocks as they open or close, then your CPU is going to sit around waiting for something new to chew on. Instruction flow efficiency doesn't stop at the silicon.
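For what it's worth, here is a crude micro-benchmark sketch of the effect being described: allocator overhead in a hot loop. The sizes and iteration count are arbitrary, and absolute numbers vary wildly between operating systems and libc allocators, which is the whole point of the argument:

```cpp
// Crude sketch of how allocator overhead can dominate an allocation-heavy
// loop. Results depend entirely on the platform's malloc implementation;
// this only illustrates the effect, it is not a Geekbench workload.
#include <chrono>
#include <cstdio>
#include <cstdlib>

int main()
{
    constexpr int kIters = 1'000'000;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        // many short-lived small allocations, as pointer-heavy code
        // (building and tearing down trees, strings, nodes) tends to do
        void* p = std::malloc(64 + (i % 8) * 16);
        if (p) {
            *static_cast<volatile char*>(p) = 1;   // touch the block so the pair isn't elided
            std::free(p);
        }
    }
    const auto stop = std::chrono::steady_clock::now();
    const double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("average malloc+free: %.1f ns\n", ns / kIters);
}
```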
#12
Vya Domus
Fouquin: It's extremely relevant and it's absolutely a problem on Windows. Ask anyone in OS development and optimization about it; it's not a well-guarded secret. The OSS guys have it figured out but they don't have the team cohesion that Apple engineers do, nor the funding. This isn't really an argument, it's a simple fact.
Then just show me any kind of benchmark that shows this difference in performance; like I said, this is the first time I am hearing about this. The reason I said it's irrelevant is that you're supposed to test the hardware, not how efficient system calls are on that platform.
Fouquin: Malloc, or the memory allocator, is the last line of definition in the performance and efficiency of any CPU architecture.
No it's not, the benchmark is supposed to test the performance level of the hardware not how effective the software implementation is. Do you not understand the point of such a benchmark ?
#13
Dredi
Vya Domus: No it's not, the benchmark is supposed to test the performance level of the hardware not how effective the software implementation is. Do you not understand the point of such a benchmark ?
But the software is also different for different instruction sets.
#14
Fierce Guppy
Fouquin: Malloc, or the memory allocator, is the last line of definition in the performance and efficiency of any CPU architecture. Your CPU design can load a trillion operations per second, but if the memory allocator in software can only parse a couple billion pointers and fails to dynamically allocate blocks as they open or close, then your CPU is going to sit around waiting for something new to chew on. Instruction flow efficiency doesn't stop at the silicon.
I realise that this is not an issue when the benchmark is confined to comparing CPUs with the same architecture, as the binary is the same. However, it becomes a big deal when benchmarking between architectures. I figured CPU benchmarks would be written in assembly language by people who have a fluent understanding of the architecture to avoid this problem. I don't see there being any better method of comparing performance between different CPU architectures. I used to have a fluent understanding of the MC68000 instruction set and it was *extremely* easy to write a function that would far outperform the same function compiled from a higher-level language.
#15
Vya Domus
Dredi: But the software is also different for different instruction sets.
Geekbench is a crappy cross platform benchmark for many reasons including this one but to be fair that's unavoidable.
Fierce Guppy: I figured CPU benchmarks would be written in assembly language by people who have a fluent understanding of the architecture to avoid this problem. I don't see there being any better method of comparing performance between different CPU architectures.
No one is writing modern software in assembly these days, it'd be crazy. The problem in this case is that it would be x86 vs ARM, what do you do about SIMD extensions like NEON and AVX for example ? Do you use them, if so which ones and why.
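To illustrate the dilemma (a minimal sketch, not how Geekbench actually handles it): the "same" loop turns into different code on each architecture as soon as intrinsics are involved, and only the plain scalar fallback is literally identical everywhere. Which paths a benchmark ships, and how well each is tuned, directly shapes the cross-platform comparison:

```cpp
// Sketch of the cross-ISA SIMD problem: the same element-wise add, three ways.
// Compiled for x86 with AVX it uses 256-bit registers, on Arm it uses NEON,
// and everywhere else it falls back to scalar code.
#include <cstddef>
#include <cstdio>
#include <vector>

#if defined(__AVX__)
  #include <immintrin.h>
#elif defined(__ARM_NEON)
  #include <arm_neon.h>
#endif

void add_arrays(const float* a, const float* b, float* out, std::size_t n)
{
    std::size_t i = 0;
#if defined(__AVX__)
    for (; i + 8 <= n; i += 8) {                    // x86: 8 floats per AVX register
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
#elif defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {                    // Arm: 4 floats per NEON register
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));
    }
#endif
    for (; i < n; ++i)                              // portable scalar tail / fallback
        out[i] = a[i] + b[i];
}

int main()
{
    std::vector<float> a(1000, 1.0f), b(1000, 2.0f), out(1000);
    add_arrays(a.data(), b.data(), out.data(), a.size());
    std::printf("out[0] = %.1f\n", out[0]);
}
```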
#16
Darmok N Jalad
I don’t know how you can do a “hardware level” benchmark and not factor in the OS and how it leverages that hardware. I’ve used this example before, but DxO PureRAW v1 uses GPU acceleration to apply AI noise reduction, and v2 uses the Neural Engine on M-chips. It takes a 20 second process down to 8 seconds, making the need for a powerful dedicated GPU unnecessary. Benchmark that however you want, but that difference was huge when you couldn’t find a good dedicated GPU to save your life not that long ago. Even now that you can, it’s not necessary to complete a demanding task. My point is that by having that dedicated hardware and a platform that makes it easy for developers to leverage, it didn’t matter if the GPU and CPU in the M1 were slower than something x86. Basically, no, a different OS running on M-chips would not perform as well, because Apple developed the hardware and software in parallel. Windows, Linux, and Android have to be able to work with whatever they get, which can be good or bad.
#17
Fierce Guppy
Vya Domus: Geekbench is a crappy cross platform benchmark for many reasons including this one but to be fair that's unavoidable.


No one is writing modern software in assembly these days, it'd be crazy. The problem in this case is that it would be x86 vs ARM, what do you do about SIMD extensions like NEON and AVX for example ? Do you use them, if so which ones and why.
But then performance comparisons between ARM and x64 are crazy when the result is dependent on compiler efficiency. Deciding on a task and writing it in the most optimised code possible seems a lot less crazy for a CPU benchmark. Those instructions should be used if they get the job done faster.
#18
Vya Domus
Darmok N Jalad: I don’t know how you can do a “hardware level” benchmark and not factor in the OS and how it leverages that hardware.
If a malloc is much slower for whatever reason on a platform, then just don't do a million mallocs. No one says not to factor in the OS; the point is that this is not what you're testing for.
Darmok N Jalad: I’ve used this example before, but DxO PureRAW v1 uses GPU acceleration to apply AI noise reduction, and v2 uses the Neural Engine on M-chips. It takes a 20 second process down to 8 seconds, making the need for a powerful dedicated GPU unnecessary.
Fine, but that wouldn't be a GPU benchmark, so comparing results from a platform that's using dedicated ML chips to a platform that isn't would obviously be stupid, because one would be an NPU benchmark and the other a GPU benchmark.
Fierce Guppy: Deciding on a task and writing it in the most optimised code possible seems a lot less crazy for a CPU benchmark.
Perhaps but no one is doing that.
#19
Fouquin
Vya Domus: like I said, this is the first time I am hearing about this.
So now is a good time to learn what malloc, realloc, calloc, etc. are and how they influence and are influenced by operating systems.
Vya Domus: No it's not, the benchmark is supposed to test the performance level of the hardware not how effective the software implementation is. Do you not understand the point of such a benchmark ?
I understand you're angry because you're trying to argue for the idealized situation, where the software can reach down and run at the machine level to extract values, but that's simply not how it works on these kinds of cross-platform programs. The reality is much more brutal: you are running a user-mode program, you need user-mode calls to obtain memory for the program, and if each operating system handles that memory differently it influences your program. Windows may only allow malloc to set limited dynamic block pointers and then require constantly clearing and reallocating those blocks; macOS may simply allow all dynamic pointers and do no block cleanup until after the blocks are destitute and cleared. However it works, it's down to how the systems individually handle memory, and that greatly influences what the hardware is able to do.

Reach out to some software performance engineers and ask questions, but arguing about it is not changing the situation any.
#20
Fierce Guppy
Vya Domus: Perhaps but no one is doing that.
Lol. I assumed it was the norm. So, nobody does cross platform CPU benchmarks, only benchmarks that measure cross-compiler/OS efficiency which are called CPU benchmarks. That's a bit of a downer.
#21
Vya Domus
Fouquin: So now is a good time to learn what malloc, realloc, calloc, etc. are and how they influence and are influenced by operating systems.
Actually it's a good time to admit you can't back up your claims.

None of those functions are influenced in any meaningful way by the operating system; malloc is implemented pretty much the same everywhere. What isn't the same is whatever system function it needs to call, but those shouldn't differ much either.
Fouquin: I understand you're angry because you're trying to argue for the idealized situation
I am not arguing that at all. If you claim something is slower because of whatever implementation of malloc on Windows, then simply try to minimize its use. You don't need to call malloc a million times, and I don't see "memory allocation" as one of the listed tests in Geekbench, do you? So I fail to see why this should be relevant to any degree.