
## [Golden Sample] RTX 5080 – 3300 MHz @ 1.020 V (Stock Curve) – Ultra-Stable & Efficient

What power do you read in FurMark, or Kombustor's Vulkan-based FurMark?
 
Since the CPU is a bit old, you may want to test the GPU with 200% resolution scaling in the game settings, like in ELEX II, Dota 2, Aliens: Fireteam Elite, or the Gothic 1 Remake.
Thanks for the suggestion!


You're absolutely right — with an older CPU like the i9-10900, I’m definitely planning to do further testing with various scaling options to better isolate GPU performance. I haven’t tested Aliens: Fireteam Elite yet, but it’s on my short list for upcoming stability and scaling validation.


So far, Cyberpunk 2077 (Path Tracing + Psycho + DLSS Quality) has been my main in-game stability test — and I’ve had solid results with 60+ minutes of continuous gameplay at 3262 MHz @ 1.020 V, fully stable. That said, I’m looking forward to exploring more titles and scenarios to fully understand this card's potential.


Thanks again for the input!

What power do you read in FurMark, or Kombustor's Vulkan-based FurMark?
Good question! I haven't run Furmark or Vulkan-based Kombustor yet, since my focus has mostly been on real-world and ray tracing-heavy loads like Port Royal, Cyberpunk 2077, and Unigine Superposition.


However, under my current 3262 MHz @ 1.020 V profile, GPU power draw stays around 280 W during Cyberpunk path tracing — with fan speeds under 1200 RPM and temps around 58–60°C, even after long sessions.


I’ll definitely add Kombustor to the list for additional thermal/power stress testing and will share results once I’ve got them logged.

1+ hour in Cyberpunk
 
280 W in Cyberpunk for a 5080 is very good. My 5070 was like 175–225 W at 3225 MHz max boost, depending on the environment.

You could probably undervolt to 200 W and lose maybe only 10% performance. Laptop-grade efficiency.
 
280 W in Cyberpunk for a 5080 is very good. My 5070 was like 175–225 W depending on the environment.
Thanks! Yeah, I was pretty surprised myself — especially considering it's running at 3262 MHz @ 1.020 V on stock air with just ~1100 RPM fan speed and staying cool around 58–60°C.


It's honestly insane how efficient this 5080 turned out to be, especially with no curve mods or undervolting tweaks applied yet. Still got more testing to do, but so far it’s holding up beautifully.
 
How is it in day-to-day use, i.e. hardware-accelerated software? It's a totally different architecture, so it's apples to oranges, but my 6950 XT was stable in games (heavy games, several hours) and benchmarks, yet crashed after two hours in paint.net.
 
My 5070 at 3225 MHz in Cyberpunk lowers its boost to 3150 MHz in Aliens and 3 GHz in FurMark. But Cyberpunk uses the tensor cores, so knowing the tensor limits is a must. The memory on the 50 series is also conservatively tuned at the factory.

Your card's cooler is good, so it runs cold and boosts higher too.

I think your GPU could undervolt to something like 2900 MHz at 0.8 V (150–180 W?). My 5070 requires 0.88 V for 2920 MHz.
 
How is it in day-to-day use, i.e. hardware-accelerated software? It's a totally different architecture, so it's apples to oranges, but my 6950 XT was stable in games (heavy games, several hours) and benchmarks, yet crashed after two hours in paint.net.
That's a fair point! So far it's been rock solid during several long and intense gaming sessions (Cyberpunk with Path Tracing, etc.) and demanding stress benchmarks. I haven't had any issues during regular day-to-day use either, but I haven't specifically stress-tested in lighter workloads like idle desktop time for hours on end. Definitely something to keep an eye on going forward.


Also worth noting: the card doesn’t constantly sit at max clocks – it naturally fluctuates based on load and thermal headroom, which probably helps overall stability.

My 5070 at 3225 MHz in Cyberpunk lowers its boost to 3150 MHz in Aliens and 3 GHz in FurMark. But Cyberpunk uses the tensor cores, so knowing the tensor limits is a must. The memory on the 50 series is also conservatively tuned at the factory.

Your card's cooler is good, so it runs cold and boosts higher too.

I think your GPU could undervolt to something like 2900 MHz at 0.8 V (150–180 W?). My 5070 requires 0.88 V for 2920 MHz.
That's an interesting point, and I appreciate you bringing it up!


So far, my RTX 5080 has proven stable not only through extended benchmark sessions like Time Spy Extreme, Port Royal and Superposition. I haven’t encountered any crashes in general desktop use either.


That said, I agree it’s important to test across different usage types. While heavy gaming and benchmarking put consistent stress on the GPU, more irregular workloads (like UI acceleration or idle background tasks) might hit other edge cases — and I’m definitely planning to do more long-duration idle or mixed-use monitoring to be fully sure.


Also, the GPU doesn’t boost constantly to max clocks — it adapts depending on load, so even during "daily use", it’s not always pushing the absolute limit. But so far, it’s been nothing but smooth.

My 5070 at 3225 MHz in Cyberpunk lowers its boost to 3150 MHz in Aliens and 3 GHz in FurMark. But Cyberpunk uses the tensor cores, so knowing the tensor limits is a must. The memory on the 50 series is also conservatively tuned at the factory.

Your card's cooler is good, so it runs cold and boosts higher too.

I think your GPU could undervolt to something like 2900 MHz at 0.8 V (150–180 W?). My 5070 requires 0.88 V for 2920 MHz.
Thanks for the detailed insight — that’s some really useful context!


Yeah, the tensor core load in Cyberpunk with Path Tracing definitely plays a unique role in stressing the GPU differently compared to other engines. I’ve also noticed that Cyberpunk tends to push closer to the limits than many traditional rasterized titles.


Regarding undervolting: I haven’t tried it on this card yet, but based on the thermals and power efficiency I’m seeing (around 280W at 3260–3280 MHz), I might give it a shot down the line. If it can hold close to 3200 MHz at something like 0.9V or even lower, that would be wild. Your 5070 undervolt is impressive — 2920 MHz at 0.88V is really solid.


And yeah, memory tuning on the 50 series seems like a big area of headroom — something else to explore once I finalize the core clock profile.
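As a quick sanity check on that 0.8 V estimate, I ran the numbers through the common first-order dynamic-power model, P ∝ f·V². This is only a rough sketch (it ignores leakage and memory/fan power), and everything except my ~280 W at 3262 MHz / 1.020 V baseline is illustrative:

```python
# Rough first-order scaling model for dynamic power: P ~ f * V^2.
# Assumption: board power is dominated by dynamic switching; leakage,
# memory and fan power are ignored, so treat the result as a ballpark.

def scaled_power(p_base_w, f_base_mhz, v_base, f_new_mhz, v_new):
    """Estimate power at a new clock/voltage point from a measured baseline."""
    return p_base_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

# Baseline from my logs: ~280 W at 3262 MHz / 1.020 V in Cyberpunk.
estimate = scaled_power(280, 3262, 1.020, 2900, 0.800)
print(f"~{estimate:.0f} W")  # ~153 W, right inside the 150-180 W guess
```

So your 150–180 W ballpark for 2900 MHz @ 0.8 V looks very plausible on this sample.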

Here is a complete HWiNFO log from when I was running Cyberpunk. It's pretty clear that my PCIe 3.0 slot is a bottleneck, but it's a beast either way :)
 

Attachments

  • Screenshot 2025-04-08 232706.png
  • Screenshot 2025-04-08 232719.png
  • Screenshot 2025-04-08 232737.png
RAM can also be a limiting factor, depending on how much VRAM the card has; Windows will normally reserve shared GPU memory equal to (or just over) the VRAM, up to 50% of system RAM.
When VRAM runs out, data spills into system RAM, although with the right card that should not happen (probably not, anyway).

I have 64 GB RAM, and Windows will reserve approximately 50% of that value for the GPU.



System RAM is not GDDR, so it's not ideal to use it.

----

Tip for manually tuning an AMD CPU: increase the bus speed until the memory rate drops, then go one step back.
After that, do voltages and clocks. I turn Performance Bias off, along with some other features.

5800X.png
 
That's a great point!


I'm actually running 32 GB of system RAM myself, and so far I haven't hit any serious system RAM fallback in even the most VRAM-heavy games (Cyberpunk 2077, Dying Light 2, etc.), but I agree it's something to keep an eye on — especially for users with 16 GB or less.


I’ve noticed that Windows does allocate a good chunk of shared memory, but the RTX 5080 I’m running has 16 GB GDDR7, and that’s been holding up very well – even at 1440p with full ray tracing + path tracing enabled.


Definitely agree though: once VRAM fills, relying on system RAM (especially slower DDR4) can really throttle performance hard.
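To put rough numbers on that: using typical quoted bandwidth figures (approximations, not measurements from my system), anything that spills out of VRAM has to live in DDR4 and cross the PCIe link, both of which are far slower than GDDR7:

```python
# Ballpark bandwidth comparison (typical quoted figures, not measured):
# why spilling out of VRAM into system RAM throttles so hard.
links_gbps = {
    "RTX 5080 GDDR7 (256-bit @ 30 Gbps)": 960.0,
    "Dual-channel DDR4-3200": 51.2,
    "PCIe 3.0 x16 (per direction)": 15.75,  # the path any spill must cross
}

vram = links_gbps["RTX 5080 GDDR7 (256-bit @ 30 Gbps)"]
print(f"VRAM baseline: {vram:.1f} GB/s")
for name, bw in links_gbps.items():
    if bw != vram:
        print(f"{name}: {bw:.2f} GB/s -> {vram / bw:.0f}x slower than VRAM")
```

Even in the best case, spilled data runs roughly 19x slower than VRAM, and the PCIe 3.0 transfer path is around 61x slower.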
 
Winkey+G when playing a game if you have no onscreen counter running.
 
I'm using the NVIDIA overlay, so I'll share some more screenshots. And I'm always using a counter, so :)

Here is a screenshot from Time Spy
 

Attachments

  • 489435409_1285650939202550_5881840269151785665_n.jpg
I don't see VRAM in the image, but nonetheless 16 GB is a good place to be. Some games I play will hit 50–60% (maybe more) VRAM used, so 10+ GB.
 
Here is a complete result of logging in Time Spy:
 

Attachments

  • Screenshot 2025-04-09 155738.png
  • Screenshot 2025-04-09 155756.png
  • Screenshot 2025-04-09 155804.png
@Siljan (OC) are you using chatgpt to translate or something because you sound exactly like chatgpt lmao
Right on the point! Great question again. Let me know if you'd like more detail.
 
I won't lie to you: I'm using it to understand and to learn. I hope that doesn't make you angry or anything.

@Siljan (OC) are you using chatgpt to translate or something because you sound exactly like chatgpt lmao


I'm sorry, but I do it to understand the system better and to learn.

"Yes, I’ve had some help from ChatGPT – but everything I share is real, based on my own testing, results, and experience. I use the AI to help summarize logs, explain technical data, and double-check benchmarks. But the card, the testing, the tuning, and the stability? That’s 100% me. The AI just helps me stay sharp and document things better."


I don't see VRAM in the image, but nonetheless 16 GB is a good place to be. Some games I play will hit 50–60% (maybe more) VRAM used, so 10+ GB.


Here is a report for Dying Light 2

Stable all the way, but the CPU is a bottleneck, so I could have achieved higher FPS.

It's getting better and better. I just tweaked the fan speed to 60% and now it's way more stable than it was. I need to do more testing; to be continued...
 

Attachments

  • 489324188_3921781981483811_8949639748774371061_n.jpg
  • 488408965_1440856460229668_426618664238026073_n.jpg
  • Screenshot 2025-04-09 192714.png
  • Screenshot 2025-04-09 192750.png
  • Screenshot 2025-04-09 192729.png
  • Screenshot 2025-04-09 203017.png
  • Screenshot 2025-04-09 205126.png
  • Screenshot 2025-04-09 205749.png
  • Screenshot 2025-04-09 210935.png
  • Screenshot 2025-04-09 211909.png
The problem is testing an old platform and a new GPU with 3D benchmarks. I propose the following to you and GPT.

Try the GPUPI benchmark. It is not a visual render but a calculative benchmark, a different type of stress really. It will also help test the stability of a VRAM overclock. Use the 32B set. Goal: faster than 1m 40s for an RTX 5080 at 3.2 GHz or higher. :)
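For context: GPUPI computes pi on the GPU through OpenCL/CUDA, so it stresses pure arithmetic rather than rendering. As a tiny CPU-side illustration of that kind of workload (this sketch uses Machin's formula in Python; GPUPI's actual algorithm and precision handling differ):

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) by Taylor series, to roughly `digits` decimal places."""
    getcontext().prec = digits + 10
    total, k = Decimal(0), 0
    while True:
        term = Decimal(1) / (Decimal(x) ** (2 * k + 1) * (2 * k + 1))
        if term < Decimal(10) ** -(digits + 5):
            break
        total += term if k % 2 == 0 else -term  # alternating series
        k += 1
    return total

def pi_machin(digits=30):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

print(str(pi_machin())[:16])  # 3.14159265358979
```

GPUPI does the same kind of high-precision arithmetic massively parallel on the GPU, which is why it catches core and VRAM instability that renders can miss.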
 
In other words, not stable.

But I'm very close :)

The problem is testing an old platform and a new GPU with 3D benchmarks. I propose the following to you and GPT.

Try the GPUPI benchmark. It is not a visual render but a calculative benchmark, a different type of stress really. It will also help test the stability of a VRAM overclock. Use the 32B set. Goal: faster than 1m 40s for an RTX 5080 at 3.2 GHz or higher. :)
Thanks for the tip, ShrimpBrime!


You're right that my platform is holding back some 3D benchmark scores, especially on the CPU side — I'm currently on an i9-10900, but that's changing soon.


That said, I’ve mainly focused on pushing the silicon quality and clock scaling of the RTX 5080 GPU itself, and verifying stability at extreme MHz under ray tracing loads.


I’ll definitely give GPUPI a shot and aim for under 1m 40s with the 32B workload, as you suggested.


Appreciate the feedback — will post results once I’ve run a few rounds :)
 
In other words, not stable.
The test result shows the stability of the GPU CLOCK (MHz) [i.e. how much it fluctuates from the max value based on changes in load], not whether the card is actually stable (i.e. how likely it is to crash or hang the system).
You will not get 100% without undervolting and underclocking (or simply using a really old card without GPU Boost technology).

As for games, I wonder how much of this "low power usage under game load" comes from the CPU simply not being able to keep up with your GPU. Just FYI: seeing "99%" GPU usage in Afterburner doesn't mean you are actually using 99% of the card. A simple FurMark-vs-game comparison proves this (both can show ~99% GPU usage, yet power consumption is very different). Lower CPU performance can also negatively impact stability testing: since the GPU can't be fully utilised, it may not crash on your CPU when it would with an OC'ed 9800X3D, for example.
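One way to see this in your own logs is to average board power across workloads that all report ~99% "usage". A minimal sketch, assuming a simple CSV export (the column names here are made up for illustration, not HWiNFO's real headers):

```python
import csv
import io
from statistics import mean

# Hypothetical log excerpt; columns are illustrative, not HWiNFO's exact export.
LOG = """workload,gpu_util_pct,board_power_w
furmark,99,338
furmark,99,340
cyberpunk,99,281
cyberpunk,98,279
"""

def avg_power(text):
    """Average board power per workload from a simple CSV log."""
    buckets = {}
    for row in csv.DictReader(io.StringIO(text)):
        buckets.setdefault(row["workload"], []).append(float(row["board_power_w"]))
    return {name: mean(vals) for name, vals in buckets.items()}

print(avg_power(LOG))  # both report ~99% "usage", yet power differs by ~60 W
```

A large power gap between workloads at the same reported "usage" is a strong hint that the lighter one is not actually saturating the shader array.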
 
Thanks for the clarification – and yes, you're right that a steady GPU clock doesn’t automatically equal full stability. What I’m doing now is exploring the silicon quality and voltage efficiency of my RTX 5080 sample through synthetic workloads like Time Spy Extreme and Speed Way, using HWiNFO to monitor behavior in detail.


So far, I’ve been able to hold around 3300–3320 MHz at ~1.020 V under full load with low temps and good fan tuning. My long-term goal is to reach and maintain a stable 3300 MHz clock in real-world gaming as well – which is difficult, but I’m getting very close.


Once I move to a newer platform and CPU, I expect better overall utilization and fewer limitations on the GPU side. I also plan to test with GPUPI soon, as suggested, to verify memory and core stability in a different way.


Appreciate the feedback – this kind of input is gold when chasing high efficiency profiles.

The problem is testing an old platform and a new GPU with 3D benchmarks. I propose the following to you and GPT.

Try the GPUPI benchmark. It is not a visual render but a calculative benchmark, a different type of stress really. It will also help test the stability of a VRAM overclock. Use the 32B set. Goal: faster than 1m 40s for an RTX 5080 at 3.2 GHz or higher. :)




Time used: 1 minute and 28.307 seconds
 

Attachments

  • Screenshot 2025-04-10 114901.png
  • Screenshot 2025-04-10 114920.png
  • Screenshot 2025-04-10 114947.png