
Primate Labs Launches Geekbench 6 with Modern Data Sets

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,772 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
Geekbench 6, the latest version of the best cross-platform benchmark, has arrived and is loaded with new and improved workloads to measure the performance of your CPUs and GPUs. Geekbench 6 is available for download today for Android, iOS, Windows, macOS, and Linux.

A lot has changed in the tech world in the past three years. Smartphone cameras take bigger and better pictures. Artificial intelligence, especially machine learning, has become ubiquitous in general and mobile applications. The number of cores in computers and mobile devices continues to rise. And how we interact with our computers and mobile devices has changed dramatically - who would have guessed that video conferencing would suddenly surge in 2020?




To keep up with these advancements, we've released Geekbench 6. This latest version of Geekbench has been designed with the modern user in mind, reflecting how we actually use our devices in 2023.

So, what's new in Geekbench 6? Let's take a look!

New and Updated Real-World Tests
Geekbench tests have always been grounded in real-world use cases and use modern, widely used applications. With Geekbench 6, we've taken this to the next level by updating existing workloads and designing several new workloads, including workloads that:
  • Blur backgrounds in video conferencing streams
  • Filter and adjust images for social media sites
  • Automatically remove unwanted objects from photos
  • Detect and tag objects in photos using machine learning models
  • Analyse, process, and convert text using scripting languages

Modern Data Sets
We also updated the datasets that the workloads process so they better align with the file types and sizes that are common today. This includes:
  • Higher-resolution photos in image tests
  • Larger maps in navigation tests
  • Larger, more complex documents in the PDF and HTML5 Browser tests
  • More (and larger) files in the developer tests

True-to-Life Scaling
The multi-core benchmark tests in Geekbench 6 have also undergone a significant overhaul. Rather than assigning separate tasks to each core, the tests now measure how cores cooperate to complete a shared task. This approach improves the relevance of the multi-core tests and is better suited to measuring heterogeneous core performance, reflecting the growing trend of combining "performance" and "efficient" cores in desktops and laptops (not just smartphones and tablets).
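To make the distinction concrete, here is a minimal, illustrative C++ sketch of the two scheduling models (it is not Geekbench's actual implementation, and work_item() is just a placeholder): in the old model each thread churns through its own private batch, while in the new model all threads pull from one shared pool of work.

  // Illustrative sketch only (not Geekbench's code): per-core tasks vs. one shared task.
  #include <atomic>
  #include <thread>
  #include <vector>

  void work_item(int) { /* placeholder for one unit of benchmark work */ }

  // Old model: each thread runs its own private batch, so the score is roughly
  // "sum of cores" and ignores how unevenly real work spreads across them.
  void separate_tasks(int threads, int items_per_thread) {
      std::vector<std::thread> pool;
      for (int t = 0; t < threads; ++t)
          pool.emplace_back([=] {
              for (int i = 0; i < items_per_thread; ++i) work_item(i);
          });
      for (auto &th : pool) th.join();
  }

  // New model: all threads pull from one shared counter and cooperate on a single
  // task, so fast cores naturally take more items and slow (efficiency) cores fewer.
  void shared_task(int threads, int total_items) {
      std::atomic<int> next{0};
      std::vector<std::thread> pool;
      for (int t = 0; t < threads; ++t)
          pool.emplace_back([&] {
              for (int i = next.fetch_add(1); i < total_items; i = next.fetch_add(1))
                  work_item(i);
          });
      for (auto &th : pool) th.join();
  }

  int main() {
      separate_tasks(4, 1'000'000);
      shared_task(4, 4'000'000);
  }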

Praise for Geekbench 6
Geekbench has long been the industry standard in benchmarking for customers and device manufacturers alike, used by semiconductor technology companies like Arm; chipset and CPU manufacturers like Qualcomm Technologies, Inc., MediaTek Inc., and AMD; device manufacturers like ASUS, Lenovo, Microsoft, Motorola, and Vivo; and even car manufacturers like Mercedes-Benz AG.

"Geekbench has been and will continue to be an important benchmark that our teams have utilized in the architectural design and implementation of our Snapdragon platforms."
Qualcomm Technologies, Inc.

"Geekbench is heavily used by MediaTek for its easy access and fairness in comparing cross-platform results. R&D can put less effort into checking software differences on diverse processor architectures and pay more attention to identifying actual hardware bottlenecks. Geekbench 6 reduces system services' impact. This helps us and our customers better analyze the performance differences over the competition."
MediaTek Inc.

Launch Sale
We're celebrating the release of Geekbench 6 with a launch sale. From now until February 28, we're offering 20% off Geekbench 6 Pro on the Primate Labs Store.

Meanwhile, Geekbench 6 is free (and will remain free) for personal use.

Whether you're a tech enthusiast or in charge of a computer lab or IT department, Geekbench 6 is the benchmark tool you need. With its improvements in workload tests and multi-core measuring, you can be sure that you're getting accurate and reliable results that reflect how your computers and devices perform in real-world settings.

View at TechPowerUp Main Site | Source
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again?
 
Joined
Mar 16, 2017
Messages
2,161 (0.76/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again?
Apple chips do really well in a lot of tasks, and some of that is due to specialized hardware and APIs that developers actually use to speed up processes. Do they have to be good at everything or just what most people actually do? Once you get into specific needs, you need specific hardware. And trust me, if the Apple chips actually sucked, you’d hear about it. Yes, they have loyal customers, but those same folks have no problem calling Apple out for crap designs, like bad keyboards, expensive wheels, throttling CPUs, and mice that can only be charged while upside down.
 
Joined
May 30, 2015
Messages
1,942 (0.56/day)
Location
Seattle, WA
Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again?

If you want x86 to look good in GeekBench, write a better malloc. Linux is already pretty close, but macOS is still the fastest. It is the primary reason why Ryzen "iMac" machines topped GeekBench 5 charts for Zen 3 scores; macOS is just built different.
 
Joined
Feb 10, 2023
Messages
283 (0.41/day)
Location
Lake Superior
Let me guess, Apple chips are suddenly a lot faster than x86 counterparts again?
My 7950X scores over 3000 in single core.
I'm not sure the M2 gained on it much in single core.
But in multi-core the much higher inter-core memory bandwidth will cause the M1/M2 to gain relatively.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,426 (4.68/day)
Location
Kepler-186f
Processor 7800X3D -25 all core
Motherboard B650 Steel Legend
Cooling Frost Commander 140
Video Card(s) Merc 310 7900 XT @3100 core -.75v
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p
Case NZXT H710 (Red/Black)
Audio Device(s) Asgard 2, Modi 3, HD58X
Power Supply Corsair RM850x Gold
as the primates eating apple chips browsed the TPU forums, they came across an article, about Apple chips and primate labs - David Attenborough
 
Joined
Mar 17, 2011
Messages
159 (0.03/day)
Location
Christchurch, New Zealand
If you want x86 to look good in GeekBench, write a better malloc. Linux is already pretty close, but macOS is still the fastest. It is the primary reason why Ryzen "iMac" machines topped GeekBench 5 charts for Zen 3 scores; macOS is just built different.

What's being tested? Compiler efficiency or the CPU?
 
Joined
Apr 29, 2020
Messages
141 (0.08/day)
The main problem with Geekbench 5 was the short duration of each test, which favoured more 'boosty' processors and flattered processors with poor cooling (as the tests didn't show any impact from thermal throttling), and the multi-core test, which just ran disparate tasks across each core and measured the total time taken, and therefore didn't reflect that some tasks don't scale well across multiple cores.

It seems they've fixed the latter, but has anyone tested if they've fixed the former? Does the test take significantly longer than Geekbench 5?
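A rough way to check that yourself is to loop a fixed CPU-bound kernel and log each run's time; a short benchmark only ever samples the boosted first seconds, whereas per-run times creeping upward over a longer run are exactly the throttling the old tests hid. A throwaway sketch (busy_work() is just a stand-in kernel I made up):

  // Throwaway throttle check (illustrative only): time repeated runs of a fixed kernel.
  #include <chrono>
  #include <cstdio>

  volatile double sink;   // keeps the compiler from optimising the work away

  void busy_work() {      // stand-in for any CPU-bound benchmark kernel
      double x = 1.0;
      for (long i = 0; i < 200'000'000; ++i) x += 1.0 / (i + 1.0);
      sink = x;
  }

  int main() {
      using clock = std::chrono::steady_clock;
      for (int run = 0; run < 60; ++run) {   // a minute or more of sustained load
          auto t0 = clock::now();
          busy_work();
          auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - t0).count();
          std::printf("run %2d: %lld ms\n", run, static_cast<long long>(ms));
      }
  }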
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
And trust me, if the Apple chips actually sucked, you’d hear about it.
No one says they suck; I am just talking about Geekbench, which is notoriously biased towards Apple's platform.

If you want x86 to look good in GeekBench, write a better malloc.
I seriously doubt that's particularly relevant here. By the way, I can't find any info on malloc being slower on Windows.

What's being tested? Compiler efficiency or the CPU?
It goes without saying that any cross platform CPU benchmark should not rely to any extensive degree on system calls.
 
Joined
Oct 15, 2019
Messages
588 (0.31/day)
It goes without saying that any cross platform CPU benchmark should not rely to any extensive degree on system calls.
To an extent, yes. Though forcing a slower version of a library to "even out" platforms is also stupid, considering that all software on that platform utilizes the fast version.

For good comparisons of the hardware, we should run some native Linux OS on all test systems, with the fastest compilers and libraries available for each hardware platform. Because of different instruction sets, each compiler will have an edge in some cases and some deficiencies in others. That's just how it goes.

Also, if you, for example, compare the media engines of different processors, like Geekbench does in some tests, it makes sense that system calls are used.
 
Joined
May 30, 2015
Messages
1,942 (0.56/day)
Location
Seattle, WA
I seriously doubt that's particularly relevant here.

It's extremely relevant, and it's absolutely a problem on Windows. Ask anyone in OS development and optimization about it; it's not a well-guarded secret. The OSS guys have it figured out, but they don't have the team cohesion that Apple engineers do, nor the funding. This isn't really an argument, it's a simple fact.

What's being tested? Compiler efficiency or the CPU?

Malloc, or memory allocator, is the last line of definition in the performance and efficiency for any CPU architecture. Your CPU design can load a trillion operations per second, but if the memory allocator in software can only parse a couple billion pointers and fails to dynamically allocate blocks as they open or close, then your CPU is going to sit around waiting for something new to chew on. Instruction flow efficiency doesn't stop at the silicon.
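If you want to see how much the allocator alone can swing, a toy micro-benchmark like the sketch below (my own illustration, nothing to do with Geekbench's internals) times a few million small, mixed-size malloc/free pairs; build the same source on Windows, Linux, and macOS with the same compiler and compare.

  // Toy allocator micro-benchmark (my own sketch, not Geekbench code): keeps a ring of
  // live blocks so the allocator can't just hand the same chunk straight back.
  #include <chrono>
  #include <cstdio>
  #include <cstdlib>

  int main() {
      constexpr int kIters = 5'000'000;
      constexpr int kLive  = 1024;              // allocations kept alive at any one time
      void *ring[kLive] = {};
      auto t0 = std::chrono::steady_clock::now();
      for (int i = 0; i < kIters; ++i) {
          int slot = i % kLive;
          std::free(ring[slot]);                // free the previous occupant of this slot
          std::size_t size = 16 + (i % 32) * 8; // mixed small sizes, 16..264 bytes
          ring[slot] = std::malloc(size);
          if (!ring[slot]) return 1;
      }
      for (void *p : ring) std::free(p);
      auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                    std::chrono::steady_clock::now() - t0).count();
      std::printf("%d malloc/free pairs in %lld us\n", kIters, static_cast<long long>(us));
  }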
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It's extremely relevant, and it's absolutely a problem on Windows. Ask anyone in OS development and optimization about it; it's not a well-guarded secret. The OSS guys have it figured out, but they don't have the team cohesion that Apple engineers do, nor the funding. This isn't really an argument, it's a simple fact.
Then just show me any kind of benchmark that shows this difference in performance; like I said, this is the first time I am hearing about this. The reason I said it's irrelevant is because you're supposed to test the hardware, not how efficient system calls are on that platform.

Malloc, or memory allocator, is the last line of definition in the performance and efficiency for any CPU architecture.
No it's not; the benchmark is supposed to test the performance level of the hardware, not how effective the software implementation is. Do you not understand the point of such a benchmark?
 
Joined
Oct 15, 2019
Messages
588 (0.31/day)
No it's not; the benchmark is supposed to test the performance level of the hardware, not how effective the software implementation is. Do you not understand the point of such a benchmark?
But the software is also different for different instruction sets.
 
Joined
Mar 17, 2011
Messages
159 (0.03/day)
Location
Christchurch, New Zealand
Malloc, or memory allocator, is the last line of definition in the performance and efficiency for any CPU architecture. Your CPU design can load a trillion operations per second, but if the memory allocator in software can only parse a couple billion pointers and fails to dynamically allocate blocks as they open or close, then your CPU is going to sit around waiting for something new to chew on. Instruction flow efficiency doesn't stop at the silicon.

I realise that this is not an issue when the benchmark is confined to comparing CPUs with the same architecture, as the binary is the same. However, it becomes a big deal when benchmarking between architectures. I figured CPU benchmarks would be written in assembly language by people who have a fluent understanding of the architecture, to avoid this problem. I don't see there being any better method of comparing performance between different CPU architectures. I used to have a fluent understanding of the MC68000 instruction set, and it was *extremely* easy to write a function that would far outperform the same function compiled from a higher-level language.
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
But the software is also different for different instruction sets.
Geekbench is a crappy cross-platform benchmark for many reasons, including this one, but to be fair that's unavoidable.

I figured CPU benchmarks would be written in assembly language by people who have a fluent understanding of the architecture to avoid this problem. I don't see there being any better method of comparing performance between different CPU architectures.
No one is writing modern software in assembly these days; it'd be crazy. The problem in this case is that it would be x86 vs ARM, so what do you do about SIMD extensions like NEON and AVX, for example? Do you use them, and if so, which ones and why?
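Whatever you pick, in practice it ends up as per-architecture code paths behind compile-time guards, roughly like the sketch below (illustrative only, not what Geekbench actually ships): the same kernel gets an AVX2 path, a NEON path, and a scalar fallback, and which extensions you enable on each side quietly decides how fair the comparison is.

  // Per-architecture kernel selection, sketched (not Geekbench's actual code).
  #include <cstddef>

  #if defined(__AVX2__)
    #include <immintrin.h>
  #elif defined(__ARM_NEON)
    #include <arm_neon.h>
  #endif

  // Adds two float arrays; each build uses the widest SIMD it was compiled for.
  void add_arrays(const float *a, const float *b, float *out, std::size_t n) {
      std::size_t i = 0;
  #if defined(__AVX2__)
      for (; i + 8 <= n; i += 8) {                  // 8 floats per 256-bit AVX register
          __m256 va = _mm256_loadu_ps(a + i);
          __m256 vb = _mm256_loadu_ps(b + i);
          _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
      }
  #elif defined(__ARM_NEON)
      for (; i + 4 <= n; i += 4) {                  // 4 floats per 128-bit NEON register
          float32x4_t va = vld1q_f32(a + i);
          float32x4_t vb = vld1q_f32(b + i);
          vst1q_f32(out + i, vaddq_f32(va, vb));
      }
  #endif
      for (; i < n; ++i)                            // scalar tail / fallback
          out[i] = a[i] + b[i];
  }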
 
Joined
Mar 16, 2017
Messages
2,161 (0.76/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
I don’t know how you can do a “hardware level” benchmark and not factor in the OS and how it leverages that hardware. I’ve used this example before, but DxO PureRAW v1 uses GPU acceleration to apply AI noise reduction, and v2 uses the Neural Engine on M-chips. It takes a 20 second process down to 8 seconds, making the need for a powerful dedicated GPU unnecessary. Benchmark that however you want, but that difference was huge when you couldn’t find a good dedicated GPU to save your life not that long ago. Even now that you can, it’s not necessary to complete a demanding task. My point is that by having that dedicated hardware and a platform that makes it easy for developers to leverage, it didn’t matter if the GPU and CPU in the M1 were slower than something x86. Basically, no, a different OS running on M-chips would not perform as well, because Apple developed the hardware and software in parallel. Windows, Linux, and Android have to be able to work with whatever they get, which can be good or bad.
 
Joined
Mar 17, 2011
Messages
159 (0.03/day)
Location
Christchurch, New Zealand
Geekbench is a crappy cross-platform benchmark for many reasons, including this one, but to be fair that's unavoidable.


No one is writing modern software in assembly these days; it'd be crazy. The problem in this case is that it would be x86 vs ARM, so what do you do about SIMD extensions like NEON and AVX, for example? Do you use them, and if so, which ones and why?

But then performance comparisons between ARM and x64 are crazy when the result is dependent on compiler efficiency. Deciding on a task and writing it in the most optimised code possible seems a lot less crazy for a CPU benchmark. Those instructions should be used if they get the job done faster.
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I don’t know how you can do a “hardware level” benchmark and not factor in the OS and how it leverages that hardware.
If a malloc is much slower on a platform for whatever reason, then just don't do a million mallocs. No one says not to factor in the OS; the point is that this is not what you're testing for.
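Sketched out, that just means hoisting the allocation out of the timed loop and reusing one buffer, so whatever the platform's malloc costs barely shows up in the measurement (a hypothetical example, not Geekbench code):

  // Keeping the allocator out of the timed region (hypothetical example).
  #include <cstddef>
  #include <vector>

  // Stand-in for the real workload kernel being measured.
  void process(float *buf, std::size_t n) {
      for (std::size_t i = 0; i < n; ++i) buf[i] = buf[i] * 0.5f + 1.0f;
  }

  // Naive: a fresh allocation every iteration, so allocator speed leaks into the score.
  void run_naive(std::size_t n, int iterations) {
      for (int i = 0; i < iterations; ++i) {
          std::vector<float> buf(n);
          process(buf.data(), n);
      }
  }

  // Better for a CPU test: allocate once, reuse the buffer, measure only the work.
  void run_preallocated(std::size_t n, int iterations) {
      std::vector<float> buf(n);
      for (int i = 0; i < iterations; ++i) process(buf.data(), n);
  }

  int main() {
      run_naive(1 << 20, 100);
      run_preallocated(1 << 20, 100);
  }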

I’ve used this example before, but DxO PureRAW v1 uses GPU acceleration to apply AI noise reduction, and v2 uses the Neural Engine on M-chips. It takes a 20 second process down to 8 seconds, making the need for a powerful dedicated GPU unnecessary.
Fine, but that wouldn't be a GPU benchmark, so comparing results from a platform that's using dedicated ML chips to a platform that isn't would obviously be stupid, because one would be an NPU benchmark and the other a GPU benchmark.
Deciding on a task and writing it in the most optimised code possible seems a lot less crazy for a CPU benchmark.
Perhaps but no one is doing that.
 
Joined
May 30, 2015
Messages
1,942 (0.56/day)
Location
Seattle, WA
like I said, this is the first time I am hearing about this.

So now is a good time to learn what malloc, realloc, calloc, etc are and how they influence and are influenced by operating systems.

No it's not; the benchmark is supposed to test the performance level of the hardware, not how effective the software implementation is. Do you not understand the point of such a benchmark?

I understand you're angry because you're trying to argue for the idealized situation, where the software can reach down and run at the machine level to extract values, but that's simply not how it works on these kinds of cross-platform programs. The reality is much more brutal: you are running a user-mode program, you need user-mode calls to attain memory for the program, and if each operating system handles that memory differently it influences your program. Windows may only allow malloc to set limited dynamic block pointers and then require constantly clearing and reallocating those blocks; macOS may simply allow all dynamic pointers and do no block cleanup until after the blocks are destitute and cleared. However it works, it's down to how the systems individually handle memory, and that greatly influences what the hardware is able to do.

Reach out to some software performance engineers and ask questions, but arguing about it is not changing the situation any.
 
Joined
Mar 17, 2011
Messages
159 (0.03/day)
Location
Christchurch, New Zealand
Perhaps but no one is doing that.

Lol. I assumed it was the norm. So nobody does cross-platform CPU benchmarks, only benchmarks that measure cross-compiler/OS efficiency, which are called CPU benchmarks. That's a bit of a downer.
 
Joined
Jan 8, 2017
Messages
9,505 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
So now is a good time to learn what malloc, realloc, calloc, etc are and how they influence and are influenced by operating systems.
Actually it's a good time to admit you can't back up your claims.

None of those functions are influenced in any meaningful way by the operating system; malloc is implemented pretty much the same everywhere. What isn't the same is whatever system function it needs to call, but those shouldn't differ much either.
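For reference, the system functions in question are just the page-level requests an allocator makes when it needs more memory from the kernel, mmap on Linux/macOS and VirtualAlloc on Windows; a rough sketch of that boundary (illustrative only, not any real allocator's code):

  // The OS-specific piece under a user-space allocator: requesting raw pages.
  // The free lists, size classes, and thread caches sit above this in user space.
  #include <cstddef>

  #if defined(_WIN32)
    #include <windows.h>
    void *os_alloc_pages(std::size_t bytes) {
        return VirtualAlloc(nullptr, bytes, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    }
  #else
    #include <sys/mman.h>
    void *os_alloc_pages(std::size_t bytes) {
        void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? nullptr : p;
    }
  #endif

  int main() { return os_alloc_pages(1 << 20) ? 0 : 1; }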

I understand you're angry because you're trying to argue for the idealized situation
I am not arguing that at all; if you claim something is slower because of whatever implementation of malloc is on Windows, then simply try to minimize its use. You don't need to call malloc a million times, and I don't see "memory allocation" as one of the listed tests in Geekbench, do you? So I fail to see why this should be relevant to any degree.
 

Hugis

Moderator
Staff member
Joined
Mar 28, 2010
Messages
825 (0.15/day)
Location
Spain(Living) / UK(Born)
System Name Office / Gamer Mk IV
Processor i5 - 12500
Motherboard TUF GAMING B660-PLUS WIFI D4
Cooling Themalright Peerless Assassin 120 RGB
Memory 32GB (2x16) Corsair CMK32GX4M2D3600C18 "micron B die"
Video Card(s) UHD770 / PNY 4060Ti (www.techpowerup.com/review/pny-geforce-rtx-4060-ti-verto)
Storage SN850X - P41Plat - SN770 - 980Pro - BX500
Display(s) Philips 246E9Q 75Hz @ 1920 * 1080
Case Corsair Carbide 200R
Audio Device(s) Realtek ALC897 (On Board)
Power Supply Cooler Master V750 Gold v2
Mouse Cooler Master MM712
Keyboard Logitech S530 - mac
Software Windows 11 Pro
I've extracted the .exe and it works without installing.
If you want to try it out, download the 7zip file, extract it, and run geekbench6.exe; it works and uploads results.


(Screenshot attachment: 1676549842293.png)



 
Joined
Mar 18, 2010
Messages
84 (0.02/day)
System Name Current
Processor AMD Ryzen 7 5800X3D
Motherboard Gigabyte Aorus X570 Elite
Cooling Full Waterloop
Memory 32Gb DDR4 3600 (4x8Gb)
Video Card(s) MSI RX6800XT Gaming X Trio 16Gb
Storage 2x SN850 1Tb & 1x Netac NV7000 2TB
Display(s) ASUS ROG Strix XG32V
Case NZXT H9 Flow
Audio Device(s) Dali Zensor 1 Speakers and Tangent Ampster BT II
Power Supply NZXT C850
Mouse G502 SE
Keyboard Currently: Monsgeek M1
Software Windows 11 64 Pro