Friday, August 2nd 2024

Intel Stock Swan-Dives 25% in Friday Trading, Spooked by Quarterly Results

Intel stock on NASDAQ slid 25% in Friday (08/02) trading as of this writing. The drop comes in the wake of the company's Q2-2024 quarterly results, which showed profitability below expectations, leading the company to suspend quarterly dividend payouts starting Q4-2024 and to enact a slew of measures to cut costs by over $10 billion, chiefly by downsizing across its various business units. Intel tried to keep investor spirits high by posting updates on how its 5N4Y (five silicon fabrication nodes in four years) plan is nearing completion, and how the company is on the cusp of raking in revenue from the AI PC upswing. To this effect, the company is launching its "Lunar Lake" and "Arrow Lake" processors within 2024 to address the various PC sub-segments. Intel stock isn't churning in a silo; tech stock prices across the industry are witnessing corrections, although few as dramatic as Intel's.
Source: FT

188 Comments on Intel Stock Swan-Dives 25% in Friday Trading, Spooked by Quarterly Results

#126
Am*
Eternit said: Brian Krzanich
He thought that AMD was no longer competition, and focused on high dividends and R&D cuts. Then not only did AMD manage to reinvent itself, but TSMC and ARM became even tougher competitors. He also abandoned GPUs, and now Intel is years behind Nvidia and AMD while GPUs are more profitable than CPUs.
Then he faked an affair with an employee and resigned. Then there was Bob Swan, who had no clue what to do. And 5N4Y is simply too little, too late.
This is a great lesson for investors: when a CEO focuses on high dividends and sacrifices R&D, shares go up in the short term, but then they collapse.
Don't forget Paul Otellini too. He was approached first by Apple to create a chip for the iPhone and turned it down because he thought it was too small a project for Intel. Qualcomm and Samsung probably wouldn't exist in the mobile market if Intel had taken that deal, and Intel would've been a company several times larger than it is today. Strike 1.

Once the iPhone proved to be a hit, Intel started working on a comparable low-power chip for phones and spent tens of billions on the project -- but they were so afraid of cannibalising their much higher-margin desktop/laptop processors with their mobile offerings that they refused to fund the project sufficiently for it to be a success, or to release what they had around 2008 (when they would've compared favourably to their competition in the mobile space). This was on par with Kodak refusing to release the digital camera for fear of cannibalising their film business. Intel resurrected the project about half a decade later, and by that point their mobile processors were way too slow, too inefficient and too far behind the competition. That's strike 2.

Finally, he also canned Intel's Larrabee/discrete GPU project -- and that was the right time for them to enter the discrete GPU market, as Intel were riding high on their success at the time, with AMD completely failing with Bulldozer. Launching it with the resources available then would've by now put them on par with AMD and Nvidia in the discrete and integrated GPU market (they re-used a lot of designs, concepts and tech from that GPU in their current Arc GPUs today). They didn't bother and therefore missed out on the crypto bubbles and the HPC discrete GPU markets that AMD and Nvidia profited handsomely from.

Intel have fumbled too many times to count and fully deserve their current place in the market today.
Posted on Reply
#127
trparky
Easo said: I am fascinated by the people who think Intel will die from this, or will be allowed to. Like, seriously, people?
Uncle Sam as the most obvious intervention path...
I don't want Intel to die either. I just want to see them with a black eye and a missing front tooth.

I also want them to learn from the mistake they made. They have got to learn that when it comes to research and development and innovation, you cannot take your foot off the gas pedal. You have to keep that gas pedal floored, or your competition will do it for you. (See Apple, Nvidia, and AMD.) They were king of the hill for way too long, and so they thought they could take their foot off the gas. *buzzer sound* Wrong answer!
Posted on Reply
#128
remixedcat
eidairaman1 said: We need true laptops like my Dell XPS Gen 1 (Inspiron 9100). That textbook-sized unit kept a P4 Gallatin core (Northwood Extreme) at 3.4 GHz, 2 GB DDR, a 7200 RPM 100 GB Hitachi HDD, and an ATi Mobility Radeon 9800 (Desktop 9700 Pro/9800 R423) GPU cool -- and heck, the GPU was overclocked. This was in 2004!

A thicker, heftier Clevo built like the XPS Gen 1 chassis but modular would be a good idea.



Ray tracing was an idea in 2004 that was supposedly going to revolutionize graphics; it's a steaming pile of horse poop today and nothing but a stupid gimmick.

Then again, only a fool is parted with their money quickly.
I remember those; they were tanks! My hubby uses a Dell Latitude E6430 as his dev laptop. It's socketed, I think, and it's built really nicely as well.
Posted on Reply
#129
eidairaman1
The Exiled Airman
remixedcat said: I remember those; they were tanks! My hubby uses a Dell Latitude E6430 as his dev laptop. It's socketed, I think, and it's built really nicely as well.
Yes, they had 15-inch screens at the time, good loud audio with a subwoofer, and they never overheated. It went with me overseas in 2005 as my PC. The GPU in it was overclocked; you can't do that with today's garbage...
Posted on Reply
#130
Vya Domus
Intel has shown an immense amount of incompetence in the last decade: a classic case of a giant corporation holding a monopoly on an industry and then letting every single one of its advantages slip away.

Everyone went fabless; Intel didn't. That used to be an advantage, but now they're spending god knows how much on R&D and they can no longer compete with TSMC anyway, and they never will.
A giant stockpile of cash was wasted on useless acquisition sprees. Lots of tech giants are guilty of this, but even when these acquisitions were relevant (AI/GPU), Intel still failed to make a dent in those markets.
They're slowly screwing up the data center side of the business too; even to this day they don't have a proper response to AMD's top-of-the-line Epyc offerings, which literally get you double the cores for the same price. Totally inexcusable.

No wonder stockholders are unhappy.
R0H1T said: Did they? Must've missed that. Any reviews highlighting that, especially the latter?
They didn't; the best-case scenario for Intel is that they barely match a 7700 XT in RT.
Posted on Reply
#131
Colddecked
Pumper said: Are they years behind? Sure, Intel does not have any high-end GPUs, but they managed to beat AMD in ray tracing performance and upscaling quality on their first try.
They're years behind on drivers alone.
Posted on Reply
#132
remixedcat
Chaitanya said: More drama:
I think all the OEMs need to drop 'em like a hot potato, and Intel needs to reap what it's sown! I just watched this and holy crapola! Intel is weak af and Pat needs to be taken out back...
Vya Domus said: Intel has shown an immense amount of incompetence in the last decade: a classic case of a giant corporation holding a monopoly on an industry and then letting every single one of its advantages slip away.

Everyone went fabless; Intel didn't. That used to be an advantage, but now they're spending god knows how much on R&D and they can no longer compete with TSMC anyway, and they never will.
A giant stockpile of cash was wasted on useless acquisition sprees. Lots of tech giants are guilty of this, but even when these acquisitions were relevant (AI/GPU), Intel still failed to make a dent in those markets.
They're slowly screwing up the data center side of the business too; even to this day they don't have a proper response to AMD's top-of-the-line Epyc offerings, which literally get you double the cores for the same price. Totally inexcusable.

No wonder stockholders are unhappy.

They didn't; the best-case scenario for Intel is that they barely match a 7700 XT in RT.
Arc graphics was the biggest money pit for them... why bother doing that when we have AMD/Nvidia GPUs that are already solidified in the market!?

Also, Intel pissed away a lot of money on SD-WAN and a bunch of SaaS and networking companies it isn't even going to use.
Posted on Reply
#133
trparky
Rumor has it that when Jim Keller was at Intel, he was supposed to be there a lot longer than the two years he ended up staying. He was chased away by internal fighting, backstabbing, and sabotage from other internal groups. He eventually said "f*** this" and left, leaving Intel holding the bag with a project only a quarter of the way done.

Intel couldn’t get out of their own damn way to let Jim do his magic and now they’re suffering for it.
Posted on Reply
#136
RandallFlagg
Philaphlous said: Economy is being held up by toothpicks. Market is still up ~3% from the beginning of July, so not that big of a deal from yesterday and today... just the daily swing seems worrisome. The average time to recession AFTER the LAST rate hike from the Fed is historically ~11 months... The last rate hike was ~July 2023, so we're about due....
This time it's probably due to the Yen carry trade, which itself is a result of distortions in the market caused by central banks. Basically, traders borrow Yen (at ~1%) to buy Dollars (at ~5%). It was even better while the Dollar appreciated vs the Yen. Now this trade is unwinding: the value of the dollar is plummeting vs the Yen, and rates in Yen are going up while dollar rates are declining. Traders are trying to get out from in front of that bulldozer, so they are selling their dollars and dollar assets to get back into Yen.
Dollar vs Yen peaked on July 8th.
NASDAQ peaked on July 9th.
There's always an event that ends a bull market, and said event is virtually never seen until too late.
This has the potential to be that event.
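For anyone unfamiliar with the mechanics, here's a rough back-of-the-envelope sketch of the trade described above; the rates and FX moves are illustrative round numbers, not market data.

```python
# Hypothetical yen carry trade: borrow yen cheaply, hold dollars at a higher rate,
# convert back at the end. Return is measured in yen terms. Numbers are made up.

def carry_trade_return(borrow_rate_jpy, invest_rate_usd, usd_vs_jpy_move):
    """usd_vs_jpy_move: +0.05 means the dollar gained 5% vs the yen over the period."""
    gross = (1 + invest_rate_usd) * (1 + usd_vs_jpy_move)  # dollar interest plus FX move
    cost = 1 + borrow_rate_jpy                             # yen borrowing cost
    return gross - cost

# While the dollar was appreciating, the trade paid the rate gap plus the FX move:
print(carry_trade_return(0.01, 0.05, +0.05))   # ~ +9% in yen terms

# If the yen strengthens sharply (dollar down 8%), the FX loss swamps the rate gap:
print(carry_trade_return(0.01, 0.05, -0.08))   # ~ -4%, hence the rush to unwind
```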
Posted on Reply
#137
TheinsanegamerN
Not surprising. Billions of dollars are at stake if Intel gets sued, the possibility of a major recall is looming, and customer confidence is utterly shattered. You know a REALLY good way to get commercial customers to stop buying Intel? Make their Xeons burn themselves out at random in mission-critical hardware.
trparky said: It's like the old Tortoise and the Hare fable. The Hare (Intel) sat down and took a nap while the Tortoise (AMD) went past them.
Exactly. Complacency kills the corporation, and Intel long thought AMD was done.
trparky said: I honestly think that a lot of Intel's problems really began when Apple dumped them as a chip supplier and went with their own chips, namely the M-series of chips. That was the beginning of the domino effect that we see now. Apple going with ARM showed the world that x86 was no longer the performance king.
Yeah....no. ARM is still a footnote in the PC market.

Intel has had one core cause of their current problems. In the early 2010s, having eclipsed AMD in technology, Intel's efforts began to slow down. Ivy Bridge was a disappointing "improvement" over Sandy Bridge in IPC, as was Haswell. Once we got to Skylake, Intel totally stalled out: just more bland quad cores with zero reason to upgrade, and people on forums talking about how there was still no justification to upgrade and why they would keep their CPUs another year or five. As Intel slowed down, and boring, conservative decisions were made instead of bold new ideas, younger engineers went to other companies like Apple, AMD, Nvidia, Qualcomm, etc.

Intel got complacent.

Then AMD hired Keller. The glove was thrown, but Intel slept. Then Ryzen 1000 came out, with an 8-core for $200. Sure, Intel was still ahead in benchmarks, but now the writing was on the wall that AMD had something. Still, Intel did nothing. They released 6- and 8-core Skylake parts with the 8th and 9th series, and those did perform very well. But as Ryzen 2000 and 3000 came out, it became clear that AMD was catching up quickly. Without their young engineers and management, Intel was left with outdated, bloated bureaucracy and ideas, resulting in Tiger Lake being mobile-only, Rocket Lake sucking the big one, and their iGPU division falling far behind. Intel gained ground back with Alder Lake, then promptly tripped over their own shoelaces.

They should have learned from Nvidia, which, despite being in the lead for ages, never let itself stop innovating the way Intel did. Even failures like the FX series or Fermi were still technical innovations, even if AMD was slapping them silly. Now Intel is playing catch-up, and trying to rush this kind of tech is NOT working out.
Posted on Reply
#138
trparky
TheinsanegamerN said: They should have learned from Nvidia, which, despite being in the lead for ages, never let itself stop innovating the way Intel did.
Hence my earlier point: when it comes to innovation, you must keep that gas pedal floored. Intel let off the gas for many years, and these are the results.

And the thing about it is, Intel hired Jim Keller to create a new CPU architecture, namely Royal Core. One of the main stars of Royal Core was something called "Rentable Units", which was supposed to replace Hyper-Threading. However, as I alluded to in a prior post and as you did as well, there was a lot of outdated and bloated bureaucracy, and that led to a lot of internal fighting, backstabbing, and sabotage that eventually drove Jim Keller to leave Intel well before Royal Core was anywhere close to complete. It was said that inside the halls of Intel, many employees looked at Jim Keller, despite his many accolades, as an outsider. They saw him as more of an enemy than someone who could save Intel.

And now we have the fruits of that debacle.
TheinsanegamerN said: Yeah....no. ARM is still a footnote in the PC market.
I disagree. While ARM is rather new in the PC space, it's more than established in the mobile market. And the advent of Apple's M-series of chips showed that ARM, despite being a low-power design, could stand toe-to-toe with its more power-hungry cousins while figuratively sipping power.
Posted on Reply
#139
mkppo
fevgatos said: Not really. Let's focus on 13th gen, for example, since those have higher field failure rates than 14th. Still, it's way lower than Zen 3, for example.

13th and 14th gen fail at a much higher rate than 12th gen (both field and shop), but at a much lower rate than Zen 3 and Zen 4. Right? Is anything I said wrong?

The field failure rate of Zen 3 alone is higher than or equal to the total failure rate of 13th gen. 'Nuff said, no?
A few important things to note:

Puget Systems gives no indication of how these systems have been used. Degradation is really dependent on usage, and many of these systems might be lower-end or x700 parts not used 24/7, so they will not really show signs of degradation. This has been well documented: not one but a whole plethora of companies running server farms are saying these Intel chips don't even last past the six-month mark. Many server vendors know this and actually jack the price up by a thousand dollars to buy Intel, as 'support costs', simply because they know they will have to replace the CPU.

Secondly, what you have to look at is field failure rates to spot the degradation. Notice how 13th gen has more than double the field failure rate of 14th gen? That's because these chips are failing through usage over time. Plot this graph a year later and they will rise way higher regardless of the microcode patch, because, as GN noted, Intel have been unsuccessfully trying to mitigate this degradation issue, having known about it for a while. And again, these aren't even all systems that are run 24/7.

Basically, Puget's failure rates are not in line with any of the system integrators who are using these chips 24/7, because those people are noticing >25% failure rates if not higher. Notice how not a single vendor who runs their systems 24/7 is complaining about Ryzens, whereas for 13th/14th gen it's not only those people complaining but game devs, other companies, studios like Epic Games, etc. They're all saying the same thing: if there's a crash, it's more likely the Intel CPU, as it has degraded. Turns out they've been correct all along, even though Intel tried to brush this under the rug and point the blame at others for the longest time.

If you ignore all of the above, think about this. You mentioned Ryzen 5000 having a higher field failure rate. Those have been out since 2020, and it looks like they have a 2% field failure rate, which sounds about right. The problem is, these 13th gen Intel CPUs don't actually have a 1% failure rate. Puget sells all sorts of Intel systems, and obviously the lower-end parts will not have failures, especially if they haven't been used much. The issue is with the higher-end 13th/14th gen parts, and the issue is twofold: they degrade quicker than the lower-end parts and absolutely disintegrate when used 24/7. Mark my words, if you look at the total percentage of chips that failed among the higher-end 13th gen parts, it will be way, way higher than 1% and will keep rising with time.
Posted on Reply
#140
remixedcat
Puget also markets to people who probably don't use their systems as much as a custom-built gamer PC gets used.

Think rich influencers who aren't home as often, or who mostly use a MacBook and only have the Puget system for occasional gaming.

Knew someone with a Falcon Northwest who used a MacBook 80% of the time.

This could be a factor, since they don't sell as many and are an elite boutique SI, not mainstream like CyberPower, Dell, HP, Lenovo, etc...
Posted on Reply
#141
phanbuey
remixedcat said: Puget also markets to people who probably don't use their systems as much as a custom-built gamer PC gets used.

Think rich influencers who aren't home as often, or who mostly use a MacBook and only have the Puget system for occasional gaming.

Knew someone with a Falcon Northwest who used a MacBook 80% of the time.

This could be a factor, since they don't sell as many and are an elite boutique SI, not mainstream like CyberPower, Dell, HP, Lenovo, etc...
What's more likely is that the settings and the make/model of motherboard matter. Puget probably has a config that doesn't YOLO-pump 1.63 V+ for 6.2 GHz on one core every time a lone thread appears.
Posted on Reply
#142
JustBenching
mkppo said: A few important things to note:

Puget Systems gives no indication of how these systems have been used. Degradation is really dependent on usage, and many of these systems might be lower-end or x700 parts not used 24/7, so they will not really show signs of degradation. This has been well documented: not one but a whole plethora of companies running server farms are saying these Intel chips don't even last past the six-month mark. Many server vendors know this and actually jack the price up by a thousand dollars to buy Intel, as 'support costs', simply because they know they will have to replace the CPU.

Secondly, what you have to look at is field failure rates to spot the degradation. Notice how 13th gen has more than double the field failure rate of 14th gen? That's because these chips are failing through usage over time. Plot this graph a year later and they will rise way higher regardless of the microcode patch, because, as GN noted, Intel have been unsuccessfully trying to mitigate this degradation issue, having known about it for a while. And again, these aren't even all systems that are run 24/7.

Basically, Puget's failure rates are not in line with any of the system integrators who are using these chips 24/7, because those people are noticing >25% failure rates if not higher. Notice how not a single vendor who runs their systems 24/7 is complaining about Ryzens, whereas for 13th/14th gen it's not only those people complaining but game devs, other companies, studios like Epic Games, etc. They're all saying the same thing: if there's a crash, it's more likely the Intel CPU, as it has degraded. Turns out they've been correct all along, even though Intel tried to brush this under the rug and point the blame at others for the longest time.

If you ignore all of the above, think about this. You mentioned Ryzen 5000 having a higher field failure rate. Those have been out since 2020, and it looks like they have a 2% field failure rate, which sounds about right. The problem is, these 13th gen Intel CPUs don't actually have a 1% failure rate. Puget sells all sorts of Intel systems, and obviously the lower-end parts will not have failures, especially if they haven't been used much. The issue is with the higher-end 13th/14th gen parts, and the issue is twofold: they degrade quicker than the lower-end parts and absolutely disintegrate when used 24/7. Mark my words, if you look at the total percentage of chips that failed among the higher-end 13th gen parts, it will be way, way higher than 1% and will keep rising with time.
Those Puget statistics are based solely on the 700K and 900K CPUs. They say so in their article.

Ryzen 5000 doesn't have a 2% failure rate; it's closer to 5%.

The reason you don't see complaints? I can bet it's the following three.

1) Ryzen 7000 doesn't experience degradation. It just has a very high upfront failure rate; meaning, it basically comes dead from the factory, at an alarming rate I'd argue. That's "easy" to notice before you deploy a server, so you never end up deploying one with a failed CPU, so you never have any crashes. With Intel the percentage, even though lower, is split: 1% comes DOA from the factory, another 1% dies after deployment. So naturally you'll have complaints, because some of your deployed Intel servers are crashing.

2) Very important: not everyone is using Puget's settings. Puget is running Intel and AMD defaults, which naturally leads to lower failure rates compared to everyone else.

3) AFAIK everyone and their mother is using Intel. Intel outsells AMD what, 8 to 2? Naturally there will be more complaints about the market leader, simply because there are way more affected users, even if the percentages of failures might be similar.
Posted on Reply
#143
mkppo
fevgatos said: Those Puget statistics are based solely on the 700K and 900K CPUs. They say so in their article.

Ryzen 5000 doesn't have a 2% failure rate; it's closer to 5%.

The reason you don't see complaints? I can bet it's the following three.

1) Ryzen 7000 doesn't experience degradation. It just has a very high upfront failure rate; meaning, it basically comes dead from the factory, at an alarming rate I'd argue. That's "easy" to notice before you deploy a server, so you never end up deploying one with a failed CPU, so you never have any crashes. With Intel the percentage, even though lower, is split: 1% comes DOA from the factory, another 1% dies after deployment. So naturally you'll have complaints, because some of your deployed Intel servers are crashing.

2) Very important: not everyone is using Puget's settings. Puget is running Intel and AMD defaults, which naturally leads to lower failure rates compared to everyone else.

3) AFAIK everyone and their mother is using Intel. Intel outsells AMD what, 8 to 2? Naturally there will be more complaints about the market leader, simply because there are way more affected users, even if the percentages of failures might be similar.
Please read posts properly without skimming. See what I wrote again and see the graph: Ryzen 5000 has a 2% field failure rate, and it was released in 2020. Field failures increase over time, so for a CPU to be at 2% four years after its launch is pretty good. Also, your "very high" upfront failure rate of 4% with Ryzen 7000 is of little concern, because Puget's stats don't align with anyone else's, simply because Puget sells way more Intel systems and their statistical sample for AMD is pretty small. What is apparent in that graph, though, is that next to no 7000-series CPUs are failing over time, unlike 13th gen.
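To illustrate the small-sample point with some quick math (the unit counts below are made up, since Puget doesn't publish theirs): the same observed 2% failure rate is far less certain when it comes from a handful of systems than from thousands.

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score interval for a failure rate of `failures` out of `n` units."""
    p = failures / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical sample sizes, not Puget's actual shipment numbers:
print(wilson_interval(4, 200))    # 2% observed from 200 units   -> roughly (0.8%, 5.0%)
print(wilson_interval(40, 2000))  # 2% observed from 2000 units  -> roughly (1.5%, 2.7%)
```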

Secondly, degradation happens with extended use. This graph you keep spamming doesn't consist of systems running 24/7, so it shows nothing. Also, the CPUs in question with high degradation are the 900Ks; including the 700Ks only skews it in their favour.

Also, you really think Intel are having all these issues, and all the complaints are coming in, because their CPUs have a 1% defect rate with usage and the percentage of failures is the same as AMD's? MULTIPLE server farms are reporting 25% - 50%, but you will use this Puget chart, which doesn't even cover systems running 24/7, because it fits your narrative. The source and data are flawed, man, get that.

Also, no, Intel doesn't outsell AMD 80:20. Where do you even get that figure? Even Puget, who have historically leaned to Intel's side, have said it used to be 80:20 until 2021, when demand for AMD systems increased. They don't even say what the current ratio is.

No, people aren't complaining because there's a 1% failure rate from the 'market leader'. A whole f ton of people are complaining because there's wayyyy more than a 1% failure rate, which Intel have been unsuccessfully trying to cover up. Even rudimentary stats from Europe show 13th gen with more than a 5% return rate, when no CPU in recent times has been anywhere close to 2%. There are many stats out there, just saying.
Posted on Reply
#144
JustBenching
mkppo said: Please read posts properly without skimming. See what I wrote again and see the graph: Ryzen 5000 has a 2% field failure rate, and it was released in 2020. Field failures increase over time, so for a CPU to be at 2% four years after its launch is pretty good. Also, your "very high" upfront failure rate of 4% with Ryzen 7000 is of little concern, because Puget's stats don't align with anyone else's, simply because Puget sells way more Intel systems and their statistical sample for AMD is pretty small. What is apparent in that graph, though, is that next to no 7000-series CPUs are failing over time, unlike 13th gen.
Ok, so let's ignore 13th and 14th gen field failure rates, since they are kinda "new" and unfair to compare with the older Ryzen 5000. So Ryzen 5000 - the ones you are saying have a pretty low field failure rate - are worse than 12th gen, worse than 10th gen, and almost on par with 11th gen. All of these chips are as old as or older than Zen 3. So how do you consider 2% a good failure rate, when on top of that they have one of the highest DOA failure rates as well?
mkppo said: Secondly, degradation happens with extended use. This graph you keep spamming doesn't consist of systems running 24/7, so it shows nothing. Also, the CPUs in question with high degradation are the 900Ks; including the 700Ks only skews it in their favour.
How do you know how the systems are run? These are sold as workstations, so I'm sure they are used more than average. Still, what difference does that make? That argument applies to both AMD and Intel chips; neither is used "24/7" (like you claim).
mkppo said: Also, you really think Intel are having all these issues, and all the complaints are coming in, because their CPUs have a 1% defect rate with usage and the percentage of failures is the same as AMD's? MULTIPLE server farms are reporting 25% - 50%, but you will use this Puget chart, which doesn't even cover systems running 24/7, because it fits your narrative. The source and data are flawed, man, get that.
Those multiple server farms are using Intel. Those multiple server farms probably aren't using Puget's settings.

The only stats from Europe I've seen are Mindfactory's, which have 13th and 14th gen at a 1% return rate. For reference, Alder Lake was at 0.48%.
Posted on Reply
#145
the54thvoid
Super Intoxicated Moderator
There are a whole bunch of caveats discussed in the article those graphs come from. I read it before commenting. Most importantly, Puget tunes their systems down, away from motherboard vendor defaults (they've done this due to the prior 11th gen issues). Therefore, Puget's graph of failures shows the failure rates of 'safe' chips. Also, notably, Puget discusses the degradation-over-time issue, and they admit they think the rates will increase for their newer installations.

In brief, the graph which shows Intel in a better light is from a system builder that proactively tries to mitigate the chip voltage problems of those chips. It's not default settings. It's not demonstrative of the actual issues in plug-and-play default installations.
Posted on Reply
#146
JustBenching
the54thvoid said: There are a whole bunch of caveats discussed in the article those graphs come from. I read it before commenting. Most importantly, Puget tunes their systems down, away from motherboard vendor defaults (they've done this due to the prior 11th gen issues). Therefore, Puget's graph of failures shows the failure rates of 'safe' chips. Also, notably, Puget discusses the degradation-over-time issue, and they admit they think the rates will increase for their newer installations.

In brief, the graph which shows Intel in a better light is from a system builder that proactively tries to mitigate the chip voltage problems of those chips. It's not default settings. It's not demonstrative of the actual issues in plug-and-play default installations.
Yep. I've said it on the other thread too: Puget's numbers are from running Intel/AMD defaults. If you're not doing that, I wouldn't be surprised if failure rates tripled.
Posted on Reply
#147
the54thvoid
Super Intoxicated Moderator
fevgatos said: Yep. I've said it on the other thread too: Puget's numbers are from running Intel/AMD defaults. If you're not doing that, I wouldn't be surprised if failure rates tripled.
But even with these mitigations in place, the guy from Puget says they haven't seen failures in the field this high since the 11th-gen processors: "We're seeing ALL of these failures happen after 6 months, which means we do expect elevated failure rates to continue for the foreseeable future and possibly even after Intel issues the microcode patch."
It mentions that's currently only 5-7 per month, so not huge numbers for them, though expected to rise.

The point is, Puget aren't representative of the DIY build industry, or of other system builders using default motherboard (or worse, turned-up) settings. Using their data is very much cherry-picking the most favourable outcome for Intel.
Posted on Reply
#148
JustBenching
the54thvoid said: But even with these mitigations in place, the guy from Puget says they haven't seen failures in the field this high since the 11th-gen processors: "We're seeing ALL of these failures happen after 6 months, which means we do expect elevated failure rates to continue for the foreseeable future and possibly even after Intel issues the microcode patch."

It mentions that's currently only 5-7 per month, so not huge numbers for them, though expected to rise.

The point is, Puget aren't representative of the DIY build industry, or of other system builders using default motherboard (or worse, turned-up) settings. Using their data is very much cherry-picking the most favourable outcome for Intel.
I'm very confident that their 14th gen field failure rates will go up to at least MATCH 13th gen. But even with that in mind, the failure rate will remain relatively low. 13th gen has been out for 20+ months and it seems to be okay.

Also, field failure rates are useless for us end consumers; you don't buy CPUs pre-tested by Puget. Your failure rate as an end user - as long as you are using Intel and AMD defaults - will be the combination of the shop and field failure rates. For people that don't know and don't care what a BIOS is, 12th gen is their best bet, followed by Zen 4 and Zen 3 (I'm ignoring 10th gen because they're too old by now). For people that do know what a BIOS is, it's Intel all the way - again, according to Puget's data.
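As a quick illustration of that combination (the rates below are placeholders, not Puget's published figures): for small, roughly independent rates, the end-user's chance of getting a bad CPU is approximately the shop rate plus the field rate.

```python
# Hypothetical combination of shop (pre-delivery) and field (post-delivery) failure rates.
# The rates used here are illustrative placeholders, not Puget's actual numbers.

def combined_failure_rate(shop_rate, field_rate):
    """Chance a given CPU fails either in the shop or later in the field,
    assuming the two failure modes are independent."""
    return 1 - (1 - shop_rate) * (1 - field_rate)

print(combined_failure_rate(0.01, 0.01))   # ~0.0199 -> roughly shop + field for small rates
print(combined_failure_rate(0.04, 0.02))   # ~0.0592 -> a high DOA rate still dominates the total
```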


What's fascinating is that it's somehow construed as though a high shop failure rate is better than a high field failure rate. Basically, shipping CPUs that are already dead or a week away from dying is supposedly better than shipping CPUs that are six months away from dying. I just can't get behind that. The numbers from Zen 3 to Zen 4 show a huge drop in the level of QC, not the other way around.
Posted on Reply
#149
john_
londiste said: There are a whole lot more to be found.

The evolution of upscaling algorithms has been relatively straightforward: old image-based upscaling, then for a long while Lanczos variations with some sharpening on top, then a temporal component got added (think TAAU), and the latest rage is "AI"-derived stuff running on matrix operations (like Tensor or XMX cores). AMD is a step behind both Nvidia and Intel in this. One could guess their AI cores are not there yet, and that is why.
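For anyone curious what the Lanczos step in that lineage looks like, here's a minimal 1-D sketch using the standard windowed-sinc kernel; it's a generic illustration of the technique, not any particular upscaler's implementation.

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos-a reconstruction kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_upscale_1d(samples, factor, a=3):
    """Resample a 1-D signal to 'factor' times its length with Lanczos weights."""
    out = []
    for i in range(int(len(samples) * factor)):
        src = i / factor                              # position in source coordinates
        lo, hi = math.floor(src) - a + 1, math.floor(src) + a
        acc = norm = 0.0
        for j in range(lo, hi + 1):
            if 0 <= j < len(samples):
                w = lanczos_kernel(src - j, a)
                acc += samples[j] * w
                norm += w
        out.append(acc / norm if norm else 0.0)       # normalise near the edges
    return out

# Doubling a tiny 1-D "image"; 2-D upscaling applies this along rows, then columns.
print(lanczos_upscale_1d([0.0, 1.0, 0.0, 1.0, 0.0], 2))
```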
There is a campaign against FSR especially. Maybe someone doesn't want to have a repeat of FreeSync.

As for Tim "DLSS for life", as you can see in the thumbnail used in HU video, he is using the Nvidia goggles to check FSR's quality in more detail.

Also Hardware Unboxed is probably the only, or at least one of the few from the big YouTube tech channels that knows nothing about Intel problems this period. Instead they rushed to make a video about AMD's Zen 5 having problems to help with Intel's damage control. They invented "issues" with Zen 5 THE SAME day 9000 series was delayed.
They didn't done the same about the latest news about Nvidia's Blackwell having a design flaw. No, they are silent there.

In my opinion, HU became as big as it was necessary to start getting gifts from big corporations. Well, if Intel's expenditure cuts affects them, we might start seeing videos about the advantages of FSR over XeSS.

People should stop posting Hardware Unboxed as one of the objective channels. They are NOT.
The whole internet talks about Intel. They are silent.
There is a rumor about AMD. They rush to make a video.
There is a rumor about Nvidia Blackwell. Again, silence.

JMO always.
Posted on Reply
#150
mkppo
fevgatos said: Ok, so let's ignore 13th and 14th gen field failure rates, since they are kinda "new" and unfair to compare with the older Ryzen 5000. So Ryzen 5000 - the ones you are saying have a pretty low field failure rate - are worse than 12th gen, worse than 10th gen, and almost on par with 11th gen. All of these chips are as old as or older than Zen 3. So how do you consider 2% a good failure rate, when on top of that they have one of the highest DOA failure rates as well?

How do you know how the systems are run? These are sold as workstations, so I'm sure they are used more than average. Still, what difference does that make? That argument applies to both AMD and Intel chips; neither is used "24/7" (like you claim).

Those multiple server farms are using Intel. Those multiple server farms probably aren't using Puget's settings.

The only stats from Europe I've seen are Mindfactory's, which have 13th and 14th gen at a 1% return rate. For reference, Alder Lake was at 0.48%.
Why would you just ignore 13th and 14th gen field failures before going on a journey through older CPUs? I mean, that's literally what Intel is having an issue with and what everyone is talking about. And it's not 1%.

Secondly, see post #141. You're misunderstanding the whole issue, which is primarily the higher-end 13th/14th gen parts degrading faster when used 24/7 under various loads (thereby accelerating the degradation curve). This chart doesn't represent that. It represents a relatively small sample of AMD CPUs, and the differences between them and Intel (in both field and shop failures) are largely negligible at best. These are all also running with Puget's settings, which we don't know, and hence we don't know how they perform either; both of which are huge caveats. But they did mention the rise in field failures for 13th gen, so it's a space to watch, because you're also forgetting that a lot of people have only recently realised their "out of video memory" errors are actually down to their 13900K.

The reason most people aren't bothered by Puget's shop failure rate is that it literally doesn't align with any of the other, larger retailers, who report extremely low numbers of shop failures (from both camps, mind you). Shop failures are also easy to diagnose, don't really affect the used market, and the numbers are below 1% from both camps, so all is well on that front. The only sore thumb that sticks out is 13900K returns rising with time (even for Puget, albeit less than everyone else). There have been numerous reports over the past few weeks of RPL returns being higher than prior gens and AMD at major European retailers. Alza.sk publishes return rates on its website; look at the 13900K being abnormally high at 5%. Then there are all these devs and farms who did some heavy lifting with these CPUs, and they died. And judging by a 2x rise in July field failures for 13th gen at Puget, this graph will probably look different in a few months.
Posted on Reply