Thursday, March 28th 2024

Developers of Outpost Infinity Siege Recommend Underclocking i9-13900K and i9-14900K for Stability on Machines with RTX 4090

Outpost: Infinity Siege developers recommend underclocking Intel's current and previous flagship desktop processors, the Core i9-14900K and i9-13900K, to prevent the game from crashing. The recommendation is aimed at those pairing a GeForce RTX 4090 with either a Core i9-13900K or i9-14900K, and we're fairly sure it extends to the i9-14900KS and i9-13900KS as well. Team Ranger, the developers of the game, have just released their second patch in the week since the game's launch. In the patch notes, they ask users to use Intel Extreme Tuning Utility (XTU) to lower the P-core maximum boost clock to 5.00 GHz, or lower. This development closely follows a February 2024 report that linked game stability issues on high-end "Raptor Lake" processors to unlocked power limits.
Source: Tom's Hardware
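A quick way to sanity-check the workaround after applying the cap in XTU is to watch reported core clocks under load. The sketch below is not from the patch notes or from Team Ranger; it assumes the third-party psutil package and only reads OS-reported frequencies (per core where the OS exposes them, otherwise a single package-wide reading), so treat it as indicative.

```python
# Hedged sketch: after capping the P-core multiplier in Intel XTU, poll the
# reported CPU frequencies under load and check that nothing still exceeds
# the intended 5.00 GHz ceiling. Requires the third-party "psutil" package;
# per-core readings are Linux-only, other OSes return a single entry.
import time
import psutil

CAP_MHZ = 5000  # the 5.00 GHz ceiling suggested in the patch notes

for _ in range(10):
    freqs = psutil.cpu_freq(percpu=True) or []
    over = [f.current for f in freqs if f.current and f.current > CAP_MHZ]
    top = max((f.current for f in freqs), default=0)
    print(f"cores reporting above {CAP_MHZ} MHz: {len(over)} (highest seen: {top:.0f} MHz)")
    time.sleep(1.0)
```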

85 Comments on Developers of Outpost Infinity Siege Recommend Underclocking i9-13900K and i9-14900K for Stability on Machines with RTX 4090

#1
Toothless
Tech, Games, and TPU!
Poor coding being dumped on players.
Posted on Reply
#3
wNotyarD
"We're sorry, but your PC is too fast to run this game."
Posted on Reply
#4
user556
Guys, it's a power draw problem, not buggy code. The CPU core supply rail is dipping during peak load, AKA a brownout. Remember, we're talking about hundreds of amps of current flowing inside the CPU! That's not easy to design for, but simply put, Intel have screwed up and may need to apply a microcode fix.
Of note, it's not the first time Intel CPUs have had a peak-loading issue. Intel explicitly down-clocked early AVX-512 implementations to avoid exactly this.
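To put the "hundreds of amps" figure into numbers: dividing package power by core voltage gives a rough lower bound on the current the VRM and the on-die power delivery have to handle. The wattage and voltage values in this sketch are illustrative, not measurements from any particular board or CPU.

```python
# Back-of-the-envelope check of the current drawn at various package powers.
# 253 W is the stock PL2 for these chips; the other figures are illustrative.
def core_current_amps(package_watts: float, vcore_volts: float) -> float:
    """I = P / V, ignoring VRM losses and non-core power for simplicity."""
    return package_watts / vcore_volts

for watts in (125, 253, 350):      # PL1-ish, stock PL2-ish, unlimited-ish loads
    for vcore in (1.10, 1.25):     # plausible loaded Vcore range (illustrative)
        amps = core_current_amps(watts, vcore)
        print(f"{watts:>3} W at {vcore:.2f} V  ->  ~{amps:.0f} A")
```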
Posted on Reply
#5
Psinet
7800X3D and 7900 XTX = no such problems, ever.

NFI why anyone would want to go with Intel or Nvidia with their massive power draw and associated heat.
Posted on Reply
#6
JohH
It uses Unreal Engine 5, and the developers of Unreal Engine 5 themselves (the RAD/Epic Games Tools team) noted problems with recent Intel CPUs:
www.radgametools.com/oodleintel.htm

Blaming it on the devs is funny. Intel responded and advised owners of the affected CPUs to avoid the problem by changing some BIOS settings.
Posted on Reply
#8
Dr_b_
Psinet: 7800X3D and 7900 XTX = no such problems, ever.

NFI why anyone would want to go with Intel or Nvidia with their massive power draw and associated heat.
Don't AMD GPUs this generation use more power than the equivalent Nvidia 4000-series GPUs?
Agree with you on the 7800X3D though; it's total BS to have to do this to your CPU to keep it from using so much power that it breaks stuff.
Posted on Reply
#9
user556
JohH: It uses Unreal Engine 5, and the developers of Unreal Engine 5 themselves (the RAD/Epic Games Tools team) noted problems with recent Intel CPUs:
www.radgametools.com/oodleintel.htm

Blaming it on the devs is funny. Intel responded and advised owners of the affected CPUs to avoid the problem by changing some BIOS settings.
Thanks for the link. Good read.
Amusing that warranty-replaced CPUs have proven an effective fix.
Posted on Reply
#10
Bwaze
Turbo button

Solution!

"With the introduction of CPUs which ran faster than the original 4.77 MHz Intel 8088 used in the IBM Personal Computer, programs which relied on the CPU's frequency for timing were executing faster than intended. Games in particular were often rendered unplayable, due to the reduced time allowed to react to the faster game events. To restore compatibility, the "turbo" button was added. Disengaging turbo mode slows the system down to a state compatible with original 8086/8088 chips."
Posted on Reply
#11
sephiroth117
Psinet: 7800X3D and 7900 XTX = no such problems, ever.

NFI why anyone would want to go with Intel or Nvidia with their massive power draw and associated heat.
1000% agree on Intel, but do you think a 4080 or a 4090 is less efficient and hotter than a 7900 XTX? ;)

AMD made a very efficient design, but Ada Lovelace is really efficient too! Nothing like the Ryzen vs. Intel efficiency gap.

Regarding your question, ray tracing and DLSS can also be reasons to choose Nvidia.
Posted on Reply
#12
BoggledBeagle
Recommendations to run these Intel CPUs at 5 GHz are no surprise to me; I have stated multiple times that these CPUs are not up to the insane speeds Intel pushes them to. I am running my 14900K at 5.2 GHz and I always felt adventurous for doing so.

I was just a little surprised that they recommend a power limit of only 125 W in the Oodle document. I had a feeling that 160 W is a perfectly fine and comfortable power draw for these CPUs, but apparently feelings may sometimes not be a reliable source of information.
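For reference, the figures being discussed here are package power limits (PL1/PL2). On Linux they can be read and adjusted through the intel_rapl powercap interface, as in the hedged sketch below (root required, values stored in microwatts); on Windows the same limits are normally set in the BIOS or through XTU. The sysfs paths follow the standard powercap layout and are not taken from the Oodle document.

```python
# Hedged sketch only: read/set the package power limits via Linux's
# intel_rapl powercap sysfs interface. Values are in microwatts; writing
# requires root. constraint_0 is the long-term limit (PL1), constraint_1
# the short-term limit (PL2).
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def read_watts(constraint: str) -> float:
    """Read a power limit file (microwatts) and return watts."""
    uw = int((RAPL / f"constraint_{constraint}_power_limit_uw").read_text())
    return uw / 1_000_000

def write_watts(constraint: str, watts: float) -> None:
    """Write a power limit in watts (root required)."""
    path = RAPL / f"constraint_{constraint}_power_limit_uw"
    path.write_text(str(int(watts * 1_000_000)))

if __name__ == "__main__":
    print("PL1 (long term): ", read_watts("0"), "W")
    print("PL2 (short term):", read_watts("1"), "W")
    # Illustrative example: cap the long-term package limit at 125 W.
    # write_watts("0", 125)
```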
Posted on Reply
#13
Ferrum Master
BoggledBeagle: these CPUs are not up to the insane speeds
Who are you to pass such a verdict on silicon ratings? Is 60 GHz Wi-Fi RF IC silicon some ethereal being to you? If anything, we are raising core frequencies too slowly, due to material problems and leakage, but that's what you get when there is a near monopoly on silicon manufacturing equipment.

If you actually read the Oodle document, it suggests various options, either load-line calibration or increasing voltage... which actually moves the ball into motherboard-maker territory, making them take the blame for not properly testing their boards.
Posted on Reply
#14
user556
BoggledBeagle: Recommendations to run these Intel CPUs at 5 GHz are no surprise to me; I have stated multiple times that these CPUs are not up to the insane speeds Intel pushes them to. I am running my 14900K at 5.2 GHz and I always felt adventurous for doing so.

I was just a little surprised that they recommend a power limit of only 125 W in the Oodle document. I had a feeling that 160 W is a perfectly fine and comfortable power draw for these CPUs, but apparently feelings may sometimes not be a reliable source of information.
Hmm, those aren't high wattages. I presume the CPU actually makes excursions well beyond that setting.
Ferrum Master: If you actually read the Oodle document, it suggests various options, either load-line calibration or increasing voltage... which actually moves the ball into motherboard-maker territory, making them take the blame for not properly testing their boards.
That same document says swapping the CPU has worked too. There's a lot of guessing going on.
Posted on Reply
#15
Crackong
Is this the very first game that actually tells the player to downclock their CPU?
Posted on Reply
#17
Ferrum Master
user556: That same document says swapping the CPU has worked too. There's a lot of guessing going on.
There is way too much speculation and clickbait. CPUs differ; that's just overclocking. The new one simply handles the OC better.

It's all pretty simple. If the board kept the power limits enabled as it should, with the higher voltages the CPU needs, the chip would be stable and would cap out at the set power limit. Everything works if the board is decent and the LLC is calculated properly, and it looks like it is not.

So you disable the power limit and the CPU craps out, because the power/voltage table isn't right at the very top end, since bloody nobody tested it there. We don't have someone like Kingpin at EVGA testing the crap out of each board, so here is the result. What changed? This TDP realm was previously the domain of LN2 overclocking, and thus got the attention to detail such TDPs need. Board makers expect that upping the power limit is enough: compile and deploy. But it isn't. As you overclock you need voltage, plus the most important bit - the vdroop each board has, depending on how shitty it is at each temperature step as it spits out heat, including in the PCB traces. The more current flows, the bigger the differences. AMD doesn't get the flak just because they aren't eating that much power; if they were, the problems would be the same.
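The vdroop argument is easy to put into numbers: with a resistive load-line, the voltage at the die sags in proportion to current, and the board's LLC setting decides how steep that sag is. The sketch below uses 1.1 mOhm as a commonly quoted ballpark for the stock load-line on this platform; treat it, like the VID and current figures, as purely illustrative rather than board measurements.

```python
# Illustration of vdroop: V_die = VID - I * R_LL.
# All numbers are illustrative, not measured values from any board.
def loaded_voltage(vid_volts: float, current_amps: float, loadline_mohm: float) -> float:
    """Voltage at the die for a given VID, load current and load-line (milliohms)."""
    return vid_volts - current_amps * (loadline_mohm / 1000.0)

VID = 1.35  # requested voltage at high boost (illustrative, not a real V/F point)

for amps in (100, 200, 300):
    for ll_mohm in (1.1, 0.5):   # stock-ish load-line vs. a more aggressive LLC level
        v_die = loaded_voltage(VID, amps, ll_mohm)
        print(f"{amps:>3} A, load-line {ll_mohm} mOhm -> ~{v_die:.3f} V at the die")
```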

Why don't motherboard makers do it right? Because of reviews. If a board throttled sooner it would look bad in reviews. So the default settings for motherboards have been a hazard for years on both platforms, and ASUS especially excels in this department. A silly competition causes them to do stupid things; it's as if the marketing department got control over the engineering team and they have no choice.
Posted on Reply
#18
user556
Motherboard makers don't just make up whatever they feel like. Intel provides a shit load of technical requirements for them to follow. Given that no single board is at fault, it's not hard to conclude the blame lands at Intel's feet.
Maybe it's a weak batch of CPUs, like a poorly attached heat spreader; maybe Intel has pushed the specs out too far; or maybe there is a design flaw in the power management in general. These all fall back on Intel.
Posted on Reply
#19
Ferrum Master
user556: Motherboard makers don't just make up whatever they feel like. Intel provides a shit load of technical requirements for them to follow. Given that no single board is at fault, it's not hard to conclude the blame lands at Intel's feet.
Maybe it's a weak batch of CPUs, like a poorly attached heat spreader; maybe Intel has pushed the specs out too far; or maybe there is a design flaw in the power management in general. These all fall back on Intel.
LoL... they do, but they often act like students: the boards are designed by amateurs and production runs like a potato-chip factory.

Listen... I have worked in RMA for decades. Apple and Samsung, for example, have capture programs to send faulty units sold in the first weeks after launch back to the R&D team for analysis. PC OEM makers... I can't remember such a thing ever being used; they have the policy, but it is never exercised. They don't care.

We, PC users, are treated like peasants: we are charged over $500 for an over-glorified piece of textolite (with RGB) and they still manage to screw it up. We are partly at fault too: we seldom open tickets, we seldom nag their support, and we seldom punish them through consumer rights for delivering inferior products. It is such a can of worms. Last year we had the AMD CPU burning scandal, where the moral was the same, poor execution and testing; here it is the same, just with Intel in the main role.

Haven't you asked yourself why Nvidia has such a tight grip on their designs and strictly regulates what partners can do? Despite operating past 500 W, they at least have no socket and a tighter layout. The OEMs are highly interested in margins, so unpopular, often product-damaging choices become possible. Most motherboard makers have little experience building 300-400 W devices (of their own design) that draw that power constantly, and their server products don't actually share consumer designs. With LN2 those figures were only peaks, and the PCB was being cooled as well. Look at server boards: their power stages have fins, not RGB covers, and they are designed around airflow, layer count and copper layer thickness. And us? We slap on an AIO and starve that area of any airflow at all, leaving it nothing but room-temperature ambient air. These 300+ W CPUs are just a different ballpark and ask for a different approach; you cannot cheap out. Like I said, if the TDP were lower, such problems would not happen. Physics.

It is also simple here: capture a defective device, attach high-speed voltage probes with logging (I hope you understand that built-in software readings are mostly indicative), and execute the code that causes trouble. Then look at the graphs. Then put everything in an oven and look again, calculate the margins, and deploy a real fix. If you can't, a new PCB revision comes and a recall should be issued, but because they are cheap they never do; they keep selling defective things with problems only techs know about. If you are an average user not running top-end parts or an OC, fine, you won't notice, but it still isn't a fair practice.
Posted on Reply
#20
Bwaze
I think the sole problem lies in the fact that these CPUs are "overclocked" as high as they can go out of the box - there isn't any performance headroom left in them, not even with custom water cooling and an absurd power allowance. It's actually impressive they can pull that off across the thousands sold. But there are bound to be some that aren't quite up to par, and it only shows under certain conditions. Just like the EVGA Nvidia RTX 3090 cards that burned up from high FPS in the menu screen of one game.

I wonder how many people have problems with these cutting-edge products that aren't easily reproducible and fail in more unique ways - even with failures as widespread as these, users usually can't simply exchange or return their faulty product.
Posted on Reply
#21
azrael
Actually, both Intel and the motherboard manufacturers are at fault here, in my opinion: Intel for making it possible to remove the power limits completely (while warning against it), and motherboard manufacturers for disregarding that warning. You have to make your product look as good as possible... even if only for a brief time before it fails. Then the blame game begins and the customer ends up between a rock and a very hard place. Just my two cents.
Posted on Reply
#22
_Under2World_
I have never done any undervolting in my life, but I can't see in which world it would increase the stability of the system. If anything, shouldn't it worsen it?
Posted on Reply
#23
qcmadness
Ferrum Master: LoL... they do, but they often act like students: the boards are designed by amateurs and production runs like a potato-chip factory. (...)

Remember the 12V power cable debacle?
Posted on Reply
#24
Bwaze
_Under2World_: I have never done any undervolting in my life, but I can't see in which world it would increase the stability of the system. If anything, shouldn't it worsen it?
In an older system, yes, of course. In newer "auto overclocking" ones, this often effectively means auto downclocking; it's not a system that tries to maintain the same clocks no matter what.
Posted on Reply
#25
Vayra86
_Under2World_: I have never done any undervolting in my life, but I can't see in which world it would increase the stability of the system. If anything, shouldn't it worsen it?
Less heat and less boost generally mean better stability. Lowering the voltage curve isn't going to hurt stability as long as there is enough voltage for a given frequency. Too much voltage doesn't add stability either: it's either stable or it's not.
Bwaze: In an older system, yes, of course. In newer "auto overclocking" ones, this often effectively means auto downclocking; it's not a system that tries to maintain the same clocks no matter what.
Or the undervolt provides the temperature headroom needed to boost better/higher. That's when you know a part was pushed too hard at stock settings.
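A rough way to see why an undervolt frees up headroom: dynamic power scales roughly with f * V^2, so a fixed negative offset trims power (and heat) at every point of the V/F curve, which the boost algorithm can then spend on higher clocks. The curve points and the 50 mV offset in the sketch below are made-up illustrative values, not measured 14900K data.

```python
# Illustration only: how a fixed voltage offset reduces dynamic power
# (proportional to f * V^2) at each point of a made-up V/F curve.
def rel_dynamic_power(freq_ghz: float, volts: float) -> float:
    """Relative dynamic power, proportional to f * V^2 (constants dropped)."""
    return freq_ghz * volts ** 2

vf_curve = [(4.5, 1.10), (5.0, 1.20), (5.5, 1.32), (5.7, 1.40)]  # (GHz, V), illustrative
OFFSET = -0.050  # a 50 mV undervolt applied across the whole curve

for freq, volts in vf_curve:
    stock = rel_dynamic_power(freq, volts)
    undervolted = rel_dynamic_power(freq, volts + OFFSET)
    saving = 100 * (1 - undervolted / stock)
    print(f"{freq:.1f} GHz: ~{saving:.1f}% less dynamic power with a {abs(OFFSET)*1000:.0f} mV offset")
```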
Posted on Reply