# Post Your Linpack Score



## Regeneration (Sep 10, 2018)

In case you missed it, the latest version of Linpack Xtreme has a benchmarking feature.

I'm wondering... how many GFlops you're getting, how long it takes in seconds, and with what hardware.


----------



## dcf-joe (Sep 10, 2018)

I will run this later for you on the PC in my specs.


----------



## kastriot (Sep 10, 2018)

Here you go:

4670K@4.4GHz


----------



## dcf-joe (Sep 11, 2018)

The first time that I ran the benchmark, it went to the main menu after the benchmark completed. The second time, it allowed me to capture the scores.

i7 2600K @ 4.8 GHz


----------



## hat (Sep 11, 2018)

Stock speed i5 2400 reporting in


----------



## chaosmassive (Sep 11, 2018)

i5-2500 @ 3.3 GHz


----------



## basco (Sep 11, 2018)

5960X 8c @ 4000 MHz (HT off), NB = 3600 MHz, 32 GB 2666 C12


----------



## Mussels (Sep 11, 2018)

Edit: this was an anomaly, higher score posted on page 2
Linpack never scores high on ryzen


----------



## Zyll Goliat (Sep 11, 2018)

OK, here are the results for my Xeon E5645 OC'd to 4.14 GHz





And here is the result for the rig that I built a few days ago. Now trying to sell this "oldish" 1156/i7 870 at stock speed


----------



## Nuckles56 (Sep 11, 2018)

My stock i5 6500 with 2133MHz RAM


----------



## er557 (Sep 11, 2018)

Seems a bit low; I've been getting 750 GFlops in LinX 0.7.1 with MKL update 2


----------



## Peter Lindgren (Sep 11, 2018)

2680v2@3.5GHz


----------



## Tomgang (Sep 11, 2018)

i7 980X @ 4.42 GHz with memory clocked at 1416 MHz.


----------



## Arctucas (Sep 11, 2018)

6700K stock:






6700K @ 4874MHz:


----------



## MrGenius (Sep 12, 2018)

3770K @ 5.1GHz + 2400MHz DDR3 10-12-12-32 1T


----------



## xkm1948 (Sep 12, 2018)




----------



## hat (Sep 12, 2018)

Why are those Skylake systems so much faster? AVX2?


----------



## Regeneration (Sep 12, 2018)

AVX, AVX2, AVX-512, and 20-thread CPUs.
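The gap lines up with theoretical peak math. As a rough, illustrative sketch (clocks and per-cycle figures below are assumed from public spec sheets, not measured in this thread): peak double-precision GFLOPS is roughly cores × GHz × FLOPs per cycle, and FLOPs per cycle roughly doubles with each SIMD generation.

```python
# Back-of-the-envelope peak DP GFLOPS: cores * GHz * FLOPs per cycle per core.
# Approximate DP FLOPs/cycle per core: SSE2 = 4, AVX = 8,
# AVX2 + FMA = 16, AVX-512 = 32. Clock figures here are illustrative.

def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak double-precision GFLOPS for one CPU."""
    return cores * ghz * flops_per_cycle

# i5-2400 (Sandy Bridge, AVX) vs i5-6500 (Skylake, AVX2 + FMA):
sandy_bridge = peak_gflops(4, 3.1, 8)    # roughly 99 GFLOPS peak
skylake = peak_gflops(4, 3.2, 16)        # roughly 205 GFLOPS peak
print(sandy_bridge, skylake)
```

Real Linpack scores land well below these peaks, but the ratio is why a similarly clocked Skylake quad can roughly double a Sandy Bridge quad.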


----------



## AlwaysHope (Sep 12, 2018)

Beware the mighty power of the Steamroller architecture, circa 2014...





Version 0.9, only using 6 threads on a 2600X @ stock clocks with XFR2 enabled, with AMD's highest officially supported DDR4 speed.





Interestingly, using IBT v2.54 I was getting over 157 GFlops @ the same CPU clocks & RAM bandwidth.


----------



## biffzinker (Sep 12, 2018)

Ryzen 2600X all cores 4.0 GHz


----------



## Mussels (Sep 12, 2018)

Interesting that AVX performance is drastically improved from 1st gen Ryzen to second gen: yours scores a lot higher than mine despite a small clock speed difference, AND you have fewer cores.


----------



## biffzinker (Sep 12, 2018)

Sorry Mussels, I posted the x86 build, but here are the x64 build results.


----------



## Deleted member 178884 (Sep 12, 2018)

Mussels said:


> drastically improved from 1st gen ryzen to second gen


As expected; after all, 2nd gen was all about improving single-thread and multi-thread performance.


----------



## Mussels (Sep 13, 2018)

That's odd that x64 was so much slower.

x86 won't even run for me; it errors about not enough RAM.


----------



## Athlonite (Sep 13, 2018)

x64 Linpack on my crappy ole FX8320 @ 3.8 GHz


----------



## biffzinker (Sep 21, 2018)

Linpack Xtreme v0.9.3


----------



## Regeneration (Sep 21, 2018)

Xeon W3680 4.35 GHz, ASUS P6T (X58), 12GB of DDR3 2300MHz.


----------



## Athlonite (Sep 22, 2018)

biffzinker said:


> Sorry Mussels, I posted the x86 build but here is the x64 build results.



SHOOT, how is it that your CPU scores lower than an FX8320? NVM, I see that with the latest update your CPU now stretches its much younger legs better than my geriatric CPU. But even with the new update I get a good boost, up from 76 GFlops to 81 GFlops.


----------



## freeagent (Sep 22, 2018)




----------



## Mussels (Sep 22, 2018)

My results are a lot faster today; unsure if it's because of the newer version or something that was running in the background last time.


----------



## Athlonite (Sep 22, 2018)

@Mussels I'd say it's the new version, as I get the same effect: 0.91 gave me 76 GFlops and 0.93 gives me 81 GFlops.


----------



## Regeneration (Sep 22, 2018)

Linpack Xtreme v0.9.3 brings AVX support for all AMD CPUs. Benchmark results are better and more accurate, and stress testing is a lot more demanding.


----------



## Mussels (Sep 22, 2018)

That explains it, and performance is more in line with where I expected it to be.


----------



## biffzinker (Sep 23, 2018)

What about AVX2? I'd be curious what the performance impact is of the splitting into 128-bit chunks.


----------



## AlwaysHope (Sep 24, 2018)

CPU@defaults + XFR2 enabled, ram is holding me back...





Btw, does the score in the stress test matter? Because it is higher @ the same clocks.





I notice at the end of the stress test there's no option to close the app except clicking on the X in the corner of the window. But even then, according to my power meter, the system was still drawing the same power level after the stress test finished.


----------



## Athlonite (Sep 24, 2018)

AlwaysHope said:


> I notice at end of stress test, no option to close app except clicking on x in corner of window, but even then, according to my power meter, system was still drawing same power level at end of stress test even when finished.



At the end of the test, when it says "*Press any key to continue*", do so; then, when it brings you back to the menu options page, press 4 and the app will exit.


----------



## AlwaysHope (Sep 25, 2018)

Athlonite said:


> at the end of the test when it says "*Press any key to continue*" do so then when it brings you back to the menu options page press 4 and the app will exit


I saw something flash up, but it's way too quick.


----------



## Hardi (Sep 25, 2018)




----------



## Athlonite (Sep 26, 2018)

AlwaysHope said:


> I saw something flash up, but it's way too quick.



Then you have some other weird problem, because it should show the "press any key to continue" just like in the screenshot posted here. Hmm, can you try maximising the window and see if it shows up at the bottom?


----------



## AlwaysHope (Sep 26, 2018)

Athlonite said:


> then you have some other weird problem then because it should show the press any key to continue just like it shows in the screen shot posted on here hmm can you try maximising the window and see it shows up at the bottom



Yep, you're right, think I need a bigger monitor.

Was testing a RAM OC, made the window a bit bigger than before...


----------



## Mussels (Sep 26, 2018)

My PC got faster through magic





edit: added more secret sauce


----------



## agent_x007 (Sep 26, 2018)

Wish there was a table to compare scores...


----------



## Deleted member 178884 (Oct 7, 2018)

7740X @ 5 GHz, delidded, 1.3 V


----------



## Mussels (Oct 13, 2018)

Slowly getting my performance to where it should be


----------



## Regeneration (Nov 3, 2018)

How are the results on Ryzen with the latest version?


----------



## baryluk (Nov 3, 2018)

AMD 2950X @ stock

398 GFLOPS with 16 threads.
360 GFLOPS with 32 threads.

404 GFLOPS peak in the first trial, because the CPU was a bit colder ;D


```
Linpack Xtreme for Linux by Regeneration

Current date/time: Sat Nov  3 23:36:39 2018

CPU frequency:    4.398 GHz
Number of CPUs: 1
Number of cores: 16
Number of threads: 16

Parameters are set to:

Number of tests: 1
Number of equations to solve (problem size) : 14200
Leading dimension of array                  : 14200
Number of trials to run                     : 40
Data alignment value (in Kbytes)            : 4

Maximum memory requested that can be used=1613408096, at the size=14200

=================== Timing linear equation system solver ===================

Size   LDA    Align. Time(s)    GFlops   Residual     Residual(norm) Check
14200  14200  4      4.726      403.9676 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.736      403.1583 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.755      401.5681 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.737      403.0120 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.741      402.7017 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.752      401.7499 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.772      400.0589 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.773      400.0034 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.788      398.7966 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.810      396.9042 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.814      396.5671 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.797      398.0441 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.808      397.0743 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.828      395.4243 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.792      398.4525 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.797      398.0257 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.793      398.3593 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.810      396.9206 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.805      397.3818 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.800      397.7804 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.804      397.4225 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.811      396.8887 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.815      396.5166 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.804      397.3918 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.789      398.6798 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.822      395.9188 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.809      397.0046 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.811      396.8418 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.809      397.0436 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.808      397.1259 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.839      394.5228 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.829      395.3860 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.831      395.2255 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.826      395.6417 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.825      395.7011 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.820      396.0932 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.814      396.6361 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.841      394.4190 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.826      395.6037 1.763539e-10 3.096172e-02   pass
14200  14200  4      4.828      395.4471 1.763539e-10 3.096172e-02   pass

Performance Summary (GFlops)

Size   LDA    Align.  Average  Maximal
14200  14200  4       397.7865 403.9676

Residual checks PASSED

End of tests
```

And for some performance freaks:

```
Performance counter stats for 'env LD_PRELOAD=libhugetlbfs-2.20.so ./xlinpack_amd64 settings':

                 3      syscalls:sys_enter_statfs #    0.000 K/sec              
                 2      syscalls:sys_enter_unlink #    0.000 K/sec              
                 1      syscalls:sys_enter_execve #    0.000 K/sec              
                 7      syscalls:sys_enter_newstat #    0.000 K/sec              
                17      syscalls:sys_enter_newfstat #    0.000 K/sec              
                40      syscalls:sys_enter_lseek  #    0.000 K/sec              
               101      syscalls:sys_enter_read   #    0.000 K/sec              
               122      syscalls:sys_enter_write  #    0.000 K/sec              
                12      syscalls:sys_enter_access #    0.000 K/sec              
                35      syscalls:sys_enter_openat #    0.000 K/sec              
                18      syscalls:sys_enter_close  #    0.000 K/sec              
                16      syscalls:sys_enter_madvise #    0.000 K/sec              
                45      syscalls:sys_enter_mprotect #    0.000 K/sec              
                 8      syscalls:sys_enter_brk    #    0.000 K/sec              
                27      syscalls:sys_enter_munmap #    0.000 K/sec              
                19      syscalls:sys_enter_set_robust_list #    0.000 K/sec              
             5,106      syscalls:sys_enter_futex  #    0.001 K/sec              
                33      syscalls:sys_enter_nanosleep #    0.000 K/sec              
               246      syscalls:sys_enter_sched_setaffinity #    0.000 K/sec              
               102      syscalls:sys_enter_sched_getaffinity #    0.000 K/sec              
     2,166,340,919      syscalls:sys_enter_sched_yield #    0.270 M/sec              
                 6      syscalls:sys_enter_getpid #    0.000 K/sec              
                 2      syscalls:sys_enter_newuname #    0.000 K/sec              
                 1      syscalls:sys_enter_prlimit64 #    0.000 K/sec              
                 1      syscalls:sys_enter_sysinfo #    0.000 K/sec              
                 1      syscalls:sys_enter_rt_sigprocmask #    0.000 K/sec              
                 2      syscalls:sys_enter_tgkill #    0.000 K/sec              
                13      syscalls:sys_enter_rt_sigaction #    0.000 K/sec              
                16      syscalls:sys_enter_exit   #    0.000 K/sec              
                 3      syscalls:sys_enter_exit_group #    0.000 K/sec              
                 2      syscalls:sys_enter_wait4  #    0.000 K/sec              
                 1      syscalls:sys_enter_set_tid_address #    0.000 K/sec              
                18      syscalls:sys_enter_clone  #    0.000 K/sec              
               133      syscalls:sys_enter_mmap   #    0.000 K/sec              
                 2      syscalls:sys_enter_arch_prctl #    0.000 K/sec              
    8036802.733872      task-clock:u (msec)       #   15.862 CPUs utilized      
           696,972      context-switches          #    0.087 K/sec              
                 0      cpu-migrations:u          #    0.000 K/sec              
13,440,588,868,208      branch-instructions       # 1672.230 M/sec                    (19.23%)
    12,112,204,894      branch-misses             #    0.09% of all branches          (19.23%)
                 0      cache-misses              #    0.000 % of all cache refs      (19.23%)
                 0      cache-references          #    0.000 K/sec                    (19.23%)
32,662,980,781,943      cpu-cycles                #    4.064 GHz                      (19.23%)
58,079,500,839,239      instructions              #    1.78  insn per cycle     
                                                  #    0.15  stalled cycles per insn  (19.23%)
8,810,743,392,517      stalled-cycles-backend    #   26.97% backend cycles idle      (19.23%)
3,331,568,847,516      stalled-cycles-frontend   #   10.20% frontend cycles idle     (19.23%)
                 0      alignment-faults          #    0.000 K/sec              
                 0      bpf-output                #    0.000 K/sec              
           696,972      context-switches          #    0.087 K/sec              
    8038963.450391      cpu-clock (msec)          #   15.866 CPUs utilized      
             2,030      cpu-migrations            #    0.000 K/sec              
                 0      emulation-faults          #    0.000 K/sec              
                 0      major-faults              #    0.000 K/sec              
             4,772      minor-faults              #    0.001 K/sec              
             4,774      page-faults               #    0.001 K/sec              
    8036802.733872      task-clock (msec)         #   15.862 CPUs utilized      
    11,842,785,732      L1-dcache-load-misses     #    0.05% of all L1-dcache hits    (19.23%)
23,475,424,450,670      L1-dcache-loads           # 2920.729 M/sec                    (19.23%)
           900,607      L1-dcache-prefetch-misses #    0.112 K/sec                    (19.23%)
           747,894      L1-dcache-prefetches      #    0.093 K/sec                    (19.23%)
     3,219,345,501      L1-icache-load-misses     #    0.37% of all L1-icache hits    (19.23%)
   861,277,017,390      L1-icache-loads           #  107.157 M/sec                    (19.23%)
1,589,073,353,317      L1-icache-prefetches      #  197.707 M/sec                    (19.23%)
                 0      LLC-load-misses           #    0.00% of all LL-cache hits     (19.23%)
                 0      LLC-loads                 #    0.000 K/sec                    (19.23%)
                 0      LLC-stores                #    0.000 K/sec                    (19.23%)
    12,094,349,323      branch-load-misses        #    1.505 M/sec                    (19.23%)
13,440,932,765,508      branch-loads              # 1672.273 M/sec                    (19.23%)
        31,000,060      dTLB-load-misses          #    0.00% of all dTLB cache hits   (19.23%)
23,469,596,011,181      dTLB-loads                # 2920.004 M/sec                    (19.23%)
            24,488      iTLB-load-misses          #    0.00% of all iTLB cache hits   (19.23%)
   861,882,732,617      iTLB-loads                #  107.232 M/sec                    (19.23%)
                 0      node-load-misses          #    0.000 K/sec                    (19.23%)
                 0      node-loads                #    0.000 K/sec                    (19.23%)
                12      exceptions:page_fault_kernel #    0.000 K/sec              
             4,762      exceptions:page_fault_user #    0.001 K/sec              
               794      irq:irq_handler_entry     #    0.000 K/sec              
         1,529,713      irq:softirq_entry         #    0.190 K/sec              
             1,756      kmem:kfree                #    0.000 K/sec              
             8,017      kmem:kmalloc              #    0.001 K/sec              
                57      kmem:kmalloc_node         #    0.000 K/sec              
               866      kmem:kmem_cache_alloc     #    0.000 K/sec              
                23      kmem:kmem_cache_alloc_node #    0.000 K/sec              
               795      kmem:kmem_cache_free      #    0.000 K/sec              
             5,840      kmem:mm_page_alloc        #    0.001 K/sec              
                 0      kmem:mm_page_alloc_extfrag #    0.000 K/sec              
             4,566      kmem:mm_page_alloc_zone_locked #    0.001 K/sec              
             5,329      kmem:mm_page_free         #    0.001 K/sec              
             3,677      kmem:mm_page_free_batched #    0.000 K/sec              
             4,278      kmem:mm_page_pcpu_drain   #    0.001 K/sec              
                 0      mce:mce_record            #    0.000 K/sec              
                 0      migrate:mm_migrate_pages  #    0.000 K/sec              
                 0      migrate:mm_numa_migrate_ratelimit #    0.000 K/sec              
                18      task:task_newtask         #    0.000 K/sec              
                 2      task:task_rename          #    0.000 K/sec              
         4,053,333      timer:hrtimer_cancel      #    0.504 K/sec              
         4,053,333      timer:hrtimer_expire_entry #    0.504 K/sec              
         4,053,333      timer:hrtimer_expire_exit #    0.504 K/sec              
             2,600      timer:hrtimer_init        #    0.000 K/sec              
         4,055,523      timer:hrtimer_start       #    0.505 K/sec              
                 0      timer:itimer_expire       #    0.000 K/sec              
                 0      timer:itimer_state        #    0.000 K/sec              
                 0      timer:tick_stop           #    0.000 K/sec              
             2,931      timer:timer_cancel        #    0.000 K/sec              
             2,900      timer:timer_expire_entry  #    0.000 K/sec              
             2,900      timer:timer_expire_exit   #    0.000 K/sec              
                 1      timer:timer_init          #    0.000 K/sec              
             2,125      timer:timer_start         #    0.000 K/sec              
             1,068      tlb:tlb_flush             #    0.000 K/sec              

     506.670556587 seconds time elapsed

    7710.898503000 seconds user
     326.274682000 seconds sys
```

PS. Scheduler and memory optimized using this:


```
MKL_DYNAMIC=FALSE OMP_NUM_THREADS=16 \
schedtool -n -10 -B -e \
env LD_PRELOAD=libhugetlbfs-2.20.so \
./xlinpack_amd64 settings > report.txt 2>&1
```

It gives me a few more GFLOPS in the results (395 -> 400), but it could be placebo.

Some monitoring data from my run on 2950X:





Data was gathered using:
`while sleep 0.2; do date '+%s.%N'; grep '^cpu MHz' /proc/cpuinfo | cut -d : -f 2; cat /sys/class/hwmon/hwmon{0,1}/temp1_input; sed -e 's,/, ,' /proc/loadavg; done`


----------



## _Flare (Feb 6, 2019)




----------



## Regeneration (Feb 6, 2019)

Linpack runs faster and more efficiently on physical cores (not virtual ones): 8 in your case.


----------



## Mussels (Feb 7, 2019)

I think it's more about how the numbers look backwards.

It's an 8-core 16-thread, not a 16-core 8-thread.


----------



## Wavetrex (Feb 7, 2019)

Which one to run?
I ran the "Quick 2GB Benchmark" but I don't think it's OK; it finishes way too fast and doesn't give the CPU time to throttle if there's high heat/power draw.




(System in signature, i7-6800K)


This is indeed bugged in Windows for AMD Ryzen; it only uses half of the actual cores, 50% CPU utilization... "cores 16, threads 8"... lol.
And performance is way too low; it loses badly to my Intel 6-core above.



Please fix


----------



## Regeneration (Feb 7, 2019)

Both run at 50% utilization. An overclocked 6800K is likely to be faster than a stock Ryzen.


----------



## Wavetrex (Feb 7, 2019)

Regeneration said:


> Both run in 50% utilization. Overclocked 6800K is likely to be faster than stock Ryzen.


Ugh.
But why?

It should use all the logical threads available, like any other software, even if that means sharing cores via SMT.
LinX (with the old build) uses all 12 threads on my Intel and shoots utilization to 100% and temperatures to the center of the sun, like it should.


----------



## Regeneration (Feb 7, 2019)

Wavetrex said:


> Ugh.
> But why?
> 
> It should use all the logical threads available, like any other software, even if that means sharing cores via SMT.
> LinX (with the old build) uses all 12 threads on my Intel and shoots utilization to 100% and temperatures to the center of the sun, like it should.



Linpack Xtreme has two operating modes: benchmark and stress test.

Benchmark runs only on true cores to provide accurate performance rating (cross-platform, cross-vendor, etc).

Stress test runs on all threads to push your CPU to its limits.


----------



## Wavetrex (Feb 7, 2019)

Regeneration said:


> Linpack Xtreme has two operating modes: benchmark and stress test.
> 
> Benchmark runs only on true cores to provide accurate performance rating (cross-platform, cross-vendor, etc).
> 
> Stress test runs on all threads to push your CPU to its limits.


I see.
Well, it doesn't feel accurate at all. In all the multi-threaded tasks that I've thrown at my two computers (video encoding, compiling, 3D rendering, data analysis), the 8-core Ryzen 7 1700 @ 3.6 GHz soundly beats the older Intel i7-6800K @ 4.2 GHz.
Yet, in Linpack my Intel CPU wins by a huge margin.

Looking at the results of others as well, this entire test seems extremely skewed towards Intel, so the entire "cross-vendor" thing feels like a big lie.


----------



## Mussels (Feb 7, 2019)

I think that's Linpack in general; it's never been AMD-friendly.


----------



## Regeneration (Feb 7, 2019)

Wavetrex said:


> I see.
> Well, it doesn't feel accurate at all, in all multi-threaded tasks that I've thrown to my two computers (video encoding, compiling, 3D rendering, data analysis), the 8-core Ryzen 7-1700 @ 3.6 Ghz beats the older Intel i7-6800K @ 4.2 Ghz soundly.
> Yet, in Linpack my Intel CPU wins by a huge margin.
> 
> Looking at the results of others as well, this entire test seems extremely skewed towards Intel, so the entire "cross-vendor" thing feels like a big lie.



First of all, you'll have to run the standard benchmark.

And second, unlike other apps, Linpack uses AVX instructions and a lot of RAM.

Memory bandwidth is a factor too.
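To put a number on "a lot of RAM": Linpack solves an N x N double-precision system, so the matrix alone needs roughly 8 * N * N bytes. Checking that against the 2950X log earlier in the thread (problem size 14200, "Maximum memory requested" of 1,613,408,096 bytes):

```python
# Memory footprint of the Linpack coefficient matrix: N x N doubles,
# 8 bytes each. The small surplus in the log's reported figure is
# solver workspace on top of the matrix itself.

def linpack_matrix_bytes(n):
    """Bytes needed for the N x N double-precision coefficient matrix."""
    return 8 * n * n

n = 14200                       # problem size from the 2950X run above
print(linpack_matrix_bytes(n))  # 1613120000, vs 1613408096 reported
```

That working set blows out every cache level, which is why memory bandwidth (and AVX width) dominates the score.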


----------



## Nutria (Jun 11, 2019)

FX-6100 @ 3.0 GHz


----------



## Arctucas (Sep 15, 2019)

v0.9 with new binaries.


----------



## er557 (Sep 16, 2019)

Where did you get the new binaries and linpack_xeon64?
Cheers


----------



## Arctucas (Oct 18, 2019)

er557 said:


> where did you get new binaries and linpack_xeon64?
> cheers



Intel website.


----------



## damric (Oct 30, 2019)

164


----------



## nico_80) (Feb 27, 2020)

Here's my Ryzen 7 3700X.


----------



## Conar-XP (Mar 4, 2020)

i5 9600KF stock.


----------



## jasko (Apr 26, 2020)

i5 4690K@4.3GHz with a strangely high score


----------



## LiquidTrance (Sep 18, 2020)

Highest frequency I could run my 9900K and still be stable in Linpack Xtreme 1.1.3. The heat is too much for anything higher. Could totally use some custom watercooling right about now. Interested to see what others are getting on their 9900Ks in Linpack Xtreme version 1.1.3, as there is barely any data out there for that specific chip/LinX version. Please test/share when you can.


----------



## Arctucas (Sep 19, 2020)

~45°F air through radiator.


----------



## LiquidTrance (Sep 19, 2020)

Arctucas said:


> ~45°F air through radiator.


Thank you for sharing this, mate. OK, so it looks like the GFlops range is about the same for 9900K frequencies as in one of the older versions of LinX, I think either 0.9.6 or 0.9.7; a 9900K at 5 GHz was getting 550 GFlops on one of those. I should clock to 5 GHz and hook up an air conditioner to see if I can get 5 GHz to pass lol. Can do up to 4.85 GHz HT on and 4.9 GHz HT off with regular air. Tried 5 GHz the other day with regular air and temps exceeded 100C quickly / got errors / mismatched residuals; didn't really wanna add any more volts after that to see if it was lack of voltage or heat, because 195 amps were already being pulled.


----------



## Arctucas (Sep 19, 2020)

LiquidTrance said:


> Thank you for sharing this mate.  Ok so it looks like the gflops range is about the same for 9900k frequencies as one of the older versions of linx, i think it was either 0.9.6 or 0.9.7., 9900K 5g was getting 550 gflops on one of those.  I should clock to 5g and hook up an air conditioner to see if I can get 5g to pass lol.  Can do up to 4.85g HT on and 4.9g HT off with regular air.  Tried 5g the other day with regular air and temps exceeded 100c quickly/got errors/mismatched residuals, didn't really wanna add any more volts after that to see if it was lack of voltage or heat because 195 amps was already getting pulled.



You did not mention what cooler you are using.

What VCore are you using? VCCSA and VCCIO also contribute a little heat. Your AIDA64 memory bench looks good. Is that 32GB of RAM?


----------



## LiquidTrance (Sep 19, 2020)

Arctucas said:


> You did not mention what cooler you are using.
> 
> What VCore are you using? VCCSA and VCCIO also contribute a little heat. Your AIDA64 memory bench looks good. Is that 32GB of RAM?


I'm on a 360 mm AIO, 4x8 GB RAM. Not sure how to gauge Vcore; I'm on a power-saving preset for AC/DC loadlines, normal Vcore LLC (the least aggressive Vcore LLC in BIOS), dynamic voltage mode with a +80 mV offset. OK, so I just got done completing this. Same settings, different RAM OC (latency is worse because the motherboard sucks at training RTLs/IOLs at 4000 MHz and up). Memory benchmark taken right after the test completed; read/write/copy can look better if I take it right after booting into Windows (62/63/61). Regular ambient. GFlops all over the place; my guess is something went off in the background. Based on how my other frequencies scored, I think 5 GHz will give somewhere between 550-555 GFlops, maybe 557ish at best.




OK, so I tried for 5 GHz / 4.6 GHz cache again with RAM at C15-4133 for the even 35.0 ns latency. Cache error with a +80 mV offset 5 seconds into the first loop, then tried again and failed on the first loop with a +90 mV offset. Temps maxed at 106C. Vcore was something like 1.28 V during the loop. I think 4.9 GHz with hyperthreading on is my limit at regular ambient for Linpack Xtreme 1.1.3 with my cooling solution, unless I bust out an air conditioner.


----------



## Arctucas (Sep 20, 2020)

LiquidTrance said:


> I'm on a 360mm aio, 4x8gb ram.  Not sure how to gauge vcore, i'm on a powersaving preset for acdc loadlines, normal vcore llc(least aggressive vcore LLC in bios), dynamic voltage mode with a +80mv offset.  Okay so I just got done completing this.  Same settings, different ram OC(latency is worse because motherboard sucks at training rtls/iols at 4000mhz and up).  Memory benchmark taken right after the test complete, read/write/copy can look better if I take it right after booting in windows(62/63/61).  Regular ambient.  Gflops all over the place, my guess is something went off in the background.  Based on how my other frequencies scored, I think 5g will give somewhere between 550-555 gflops, maybe 557ish at best.
> 
> Ok so I tried for 5ghz/4.6ghzCache again with ram at c15-4133 for the even 35.0ns latency.  Cache error with +80mv offset 5 seconds into first loop, then tried again and failed on first loop with +90mv offset.  Temps maxed at 106c.  vcore was something like 1.28v during the loop.  I think 4.9ghz hyperthreading turned on is my limit on regular ambient for linpack xtreme 1.1.3 with my cooling solution or unless I bust out an air conditioner.



Might I suggest running HWiNFO, as I did, while LinPack X is running?

Then the Voltages, Temperatures, etc. Min/Max can be seen.


----------



## LiquidTrance (Sep 20, 2020)

Arctucas said:


> Might I suggest running HWiNFO, as I did, while LinPack X is running?
> 
> Then the Voltages, Temperatures, etc. Min/Max can be seen.


I run HWiNFO64 for a 3-loop intro just to see temps + min/max voltage. At 4.9 GHz it was around 1.256 V load voltage, I think. At 4.85 GHz it was 1.21-1.22 V load voltage; I think the average was 1.21 V load. I don't run HWiNFO during the 30-loop run because I don't want polling to mess with GFlops and/or residuals; though I don't think it affects residuals, it might affect GFlop consistency. Temps during 4.85 GHz are 90-95C and temps during 4.9 GHz were 95-97C, maybe topping out at 100C.

Btw, that's a sick 5.266ghz cinebench r20 score you got ^^


----------



## Arctucas (Sep 20, 2020)

LiquidTrance said:


> I run hwinfo64 for a 3 loop intro just to see temps + min/max voltage.  at 4.9ghz, it was around 1.256v load voltage I think.  At 4.85ghz it was 1.21-1.22v load voltage, i think average was 1.21v load.  I don't run hwinfo during the 30 loop because I don't want polling to mess with gflops and/or residuals though i don't think it affects residuals, it might affect gflop consistency.  Temps during 4.85v are 90c-95c and temps during 4.9ghz were 95c-97c, maybe topping out at 100c.
> 
> Btw, that's a sick 5.266ghz cinebench r20 score you got ^^



OK. Thought it might help identify where you could make some adjustments to lower temps.

Which CBR20 score was that? I have done several.


----------



## LiquidTrance (Sep 20, 2020)

Arctucas said:


> OK. Thought it might help identify where you could make some adjustment to lower temps.
> 
> Which CBR20 score was that? I have done several.


The score on the front page of the share-your-Cinebench thread ^^. In regards to temps, I think the only thing I could do is lower frequency/volts, because I'm already on the lowest form of LLC.


----------



## Arctucas (Sep 20, 2020)

LiquidTrance said:


> The score on the front page of the "share your Cinebench" thread ^^. In regards to temps, I think the only thing I could do is lower frequency/volts, because I'm already on the lowest form of LLC.






OK. Has not been updated for a while.

This was my best score. https://www.techpowerup.com/forums/threads/post-your-cinebench-r20-score.213237/post-4221302.

You have a 360 AIO, correct? That should be doing a better cooling job than what you are seeing. Perhaps there is an issue with the cooler?


----------



## LiquidTrance (Sep 20, 2020)

Arctucas said:


> OK. Has not been updated for a while.
> 
> This was my best score. https://www.techpowerup.com/forums/threads/post-your-cinebench-r20-score.213237/post-4221302.
> 
> You have a 360 AIO, correct? That seems as if it might do a better job of cooling than what you seem to have. Perhaps there is an issue with the cooler?


Cooler is working OK; pump/fans are operational at the speeds/RPMs I set them to. I think the chip is a guzzler: it wants 1.3V load voltage to be stable at 5GHz in OCCT Large AVX2, and about 1.35V for 5.1GHz. A delid would probably help a bit with temps.


----------



## Arctucas (Sep 20, 2020)

LiquidTrance said:


> Cooler is working OK; pump/fans are operational at the speeds/RPMs I set them to. I think the chip is a guzzler: it wants 1.3V load voltage to be stable at 5GHz in OCCT Large AVX2, and about 1.35V for 5.1GHz. A delid would probably help a bit with temps.



Yes, de-lid will help, direct die would be even better.


----------



## AlwaysHope (Sep 29, 2020)

OP, with v1.1.3 any chance of the 14GB RAM & up being out of "experimental" mode in future releases?
32GB systems are becoming more popular.


----------



## un_little_gender (Oct 11, 2020)

Dell Precision 5520
i7-7820hq
Hynix DDR4 @ 2667 CL17


----------



## LiquidTrance (Oct 16, 2020)

Work in progress: 4x8GB @ 4242MHz CAS 17. Quick preliminary 10-pass run with my 9000k at 4.85GHz all-core and 4.4GHz cache. GFLOPS average increased by 3 over my 4x8GB @ 3933 CAS 15 configuration at the same core/cache speed. 1.45V VDIMM, 1.3V SA/IO.


----------



## Arctucas (Oct 16, 2020)




----------



## LiquidTrance (Oct 17, 2020)

Arctucas said:


> View attachment 172049


Are you clocked at 5GHz with an air conditioner hooked up blowing cold air through your rad again? How do those residuals look without unconventional cooling? I don't think I've seen you post that yet. Does custom water cooling alone, without the air-conditioning unit, allow you to match all residuals at 5GHz?


----------



## Arctucas (Oct 17, 2020)

LiquidTrance said:


> Are you clocked at 5GHz with an air conditioner hooked up blowing cold air through your rad again? How do those residuals look without unconventional cooling? I don't think I've seen you post that yet. Does custom water cooling alone, without the air-conditioning unit, allow you to match all residuals at 5GHz?



Actually, this run was done without the A/C. Ambient room temp was ~24°C. Highest core was ~76°C.

5000MHz CPU clock.

Will post another run in the morning with much cooler temperatures...


----------



## LiquidTrance (Oct 17, 2020)

Arctucas said:


> Actually, this run was done without the A/C. Ambient room temp was ~24°C. Highest core was ~76°C.
> 
> 5000MHz CPU clock.
> 
> Will post another run in the morning with much cooler temperatures...


Solid. Did you change clock speeds on the RAM? Your GFLOPS average increased by 3 compared to the last run you posted. Looking pretty good.


----------



## Arctucas (Oct 17, 2020)

Radiator in open window, outside air temperature ~37°F.

5300MHz.







LiquidTrance said:


> Solid. Did you change clock speeds on the RAM? Your GFLOPS average increased by 3 compared to the last run you posted. Looking pretty good.



No, just tweaked timings.

Unfortunately, this kit will not overclock.


----------



## LiquidTrance (Oct 17, 2020)

Arctucas said:


> Radiator in open window, outside air temperature ~37°F.
> 
> 5300MHz.
> 
> ...


Le golden chip. If only I had been so lucky ><. I got a real shitter that can't do Linpack above 4.9GHz ;/. The thing has a concave IHS, lol. Oh well, guess I'll just have to live with it for the next 10 years. I probably won't build a PC like this again when this one gets outdated; you never know if you're gonna get sold a bad chip or not, which makes it really hard to invest in high-end memory kits and custom water cooling. Like, I'd be livid if I had spent all that money on custom water cooling only to *maybe* get 5GHz Linpack stable with my CPU. Thankfully I only spent $200 on cooling. How much did you spend in total on custom water cooling to get 5% more GFLOPS than me? Around $500 USD for a waterblock/pumps/rads/tubing/fittings, etc.? More than that? Did you also spend money on a direct-die bracket + delid tool? Getting that extra 5% performance sounds REALLY costly, like more than the cost of the CPU itself.



Arctucas said:


> Radiator in open window, outside air temperature ~37°F.
> 
> 5300MHz.
> 
> ...



What's the highest your chip will do in Linpack *without* unconventional cooling, aka putting your rad in the window? lol. Just a regular good ol' rad inside your case, without an open window or the house A/C on.


----------



## Arctucas (Oct 17, 2020)

LiquidTrance said:


> Le golden chip. If only I had been so lucky ><. I got a real shitter that can't do Linpack above 4.9GHz ;/. Oh well, guess I'll just have to live with it for the next 10 years. I probably won't build a PC like this again when this one gets outdated; you never know if you're gonna get sold a bad chip or not, which makes it really hard to invest in high-end memory kits and custom water cooling. Like, I'd be livid if I had spent all that money on custom water cooling only to *maybe* get 5GHz Linpack stable. Thankfully I only spent $200 on cooling. How much did you spend in total on custom water cooling to get 5% more GFLOPS than me? Around $500 USD for a waterblock/pumps/rads/tubing/fittings, etc.?



If only.

Black Ice Nemesis GTR560 = $200
Noctua NF-A14-iPPC2000 @ $22.50 x 8 = $180
Heatkiller IV Pro Nickel w/backplate = $91
RockitCool direct die kit = $30
XSPC D5T Vario @ $77 x 2 = $154
XSPC dual D5 dual bay reservoir = $90
Koolance flowmeter and adapter = $43
Tygon A-60-G = $120 (bought a 50 ft. box)
Koolance QDH4 female disconnect @ $16.50 x 2 = $33
Koolance QDH4 male disconnect @ $13.50 x 2 = $27
Miscellaneous wiring, connectors, terminals, sleeving, clamps, fittings, screws = $75 (guesstimate)
Over $1000 USD.

None of this includes taxes or freight charges.
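As a quick sanity check on that total, the itemized list above (prices and quantities exactly as stated, including the $75 guesstimate line) sums as follows:

```python
# (item, unit_price_usd, quantity) for each line in the list above
parts = [
    ("Black Ice Nemesis GTR560", 200.00, 1),
    ("Noctua NF-A14-iPPC2000", 22.50, 8),
    ("Heatkiller IV Pro Nickel w/backplate", 91.00, 1),
    ("RockitCool direct die kit", 30.00, 1),
    ("XSPC D5T Vario", 77.00, 2),
    ("XSPC dual D5 dual bay reservoir", 90.00, 1),
    ("Koolance flowmeter and adapter", 43.00, 1),
    ("Tygon A-60-G (50 ft. box)", 120.00, 1),
    ("Koolance QDH4 female disconnect", 16.50, 2),
    ("Koolance QDH4 male disconnect", 13.50, 2),
    ("Misc wiring/fittings (guesstimate)", 75.00, 1),
]
total = sum(price * qty for _, price, qty in parts)
print(f"${total:,.2f}")  # -> $1,043.00, before taxes and freight
```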



LiquidTrance said:


> Le golden chip. If only I had been so lucky ><. I got a real shitter that can't do Linpack above 4.9GHz ;/. Oh well, guess I'll just have to live with it for the next 10 years. I probably won't build a PC like this again when this one gets outdated; you never know if you're gonna get sold a bad chip or not, which makes it really hard to invest in high-end memory kits and custom water cooling. Like, I'd be livid if I had spent all that money on custom water cooling only to *maybe* get 5GHz Linpack stable with my CPU. Thankfully I only spent $200 on cooling. How much did you spend in total on custom water cooling to get 5% more GFLOPS than me? Around $500 USD for a waterblock/pumps/rads/tubing/fittings, etc.? More than that? Did you also spend money on a direct-die bracket + delid tool? Getting that extra 5% performance sounds REALLY costly, like more than the cost of the CPU itself.
> 
> 
> 
> What's the highest your chip will do in Linpack *without* unconventional cooling, aka putting your rad in the window? lol. Just a regular good ol' rad inside your case, without an open window or the house A/C on.



5300+ if I add BCLK; with ambients of ~25°C, that puts the CPU package temperature in the mid-90s °C.

Radiator is external.


----------



## LiquidTrance (Oct 17, 2020)

Arctucas said:


> If only.
> 
> Black Ice Nemesis GTR560 = $200
> Noctua NF-A14-iPPC2000 @$22.50 x 8 = $180
> ...


Wow, just wow. OK, now I don't feel as bad about my hardware setup or my investment, though I'm still a little salty tbh, since I only got an extra 150MHz over stock on a flagship K-SKU chip. But wow, over $1000 USD more spent than me for only 6% more performance on a $500 chip (assuming shipping/freight was around $200 for all that stuff). Nice setup though; I'm sure it is very pretty. Holy cow, I still can't believe it. Literally in shock. I didn't even know it was possible to spend that much on just cooling; I thought it maxed out around $500 or $600 tops. Mind = blown.


----------



## Arctucas (Oct 17, 2020)

LiquidTrance said:


> Wow, just wow. OK, now I don't feel bad AT ALL about my hardware setup or my investment. I feel like I got a deal compared to what you spent. Over $1000 USD more spent for 6% more performance on a $500 chip. Nice setup though; I'm sure it is very pretty. Holy cow, I still can't believe it. Literally in shock.



Heh, many people spend far more. Seen guys with multiple radiators, pumps, gpu blocks, probably 2-3 times what I spent.

Then there are those with more exotic cooling; chillers, phase change, etc.

In today's enthusiast market, $1000 is not all that much.


----------



## LiquidTrance (Oct 17, 2020)

Arctucas said:


> Heh, many people spend far more. Seen guys with multiple radiators, pumps, gpu blocks, probably 2-3 times what I spent.
> 
> Then there are those with more exotic cooling; chillers, phase change, etc.
> 
> In today's enthusiast market, $1000 is not all that much.



Now I understand why AMD made it big with the Ryzen/Zen architecture: you don't even need to spend extra on custom water cooling to get all the performance out of the chip; it's already maxed out. $1k is still $1k, and it's still twice the cost of the CPU and five times the cost of an average $200 360mm AIO, for only 6% more performance. Anywho. Nice flex, I guess.


----------



## Machinus (Apr 11, 2021)

5950x at 4200MHz, fclk at 1900
SMT disabled


----------



## motleyguts (May 9, 2021)

5800X, RAM's not dialed in yet so maybe it'll hit 350ish


----------



## registertotypetostrangers (Aug 13, 2021)

10850k stock clocks, power limit removed.  Gskill F4-3600C18-32 (Hynix MJR) @ 4000 18-22-22-42



Thanks for this awesome tool, btw. It makes it easy to see which memory settings increase/decrease performance.


----------

