
Is Intel going to deliver a processor/chipset worth waiting for?

I never doubted the significance of the performance gain, did I?

All I said was that the above things are architecturally pretty much the same thing. This statement is not supposed to have any kind of negative or positive connotation attached to it.

Edit: If you manage to slap some magical bullshit back wing onto your car that produces enough downforce to make it 0.5 seconds faster in the 0-60, that's bloody awesome, really, but I'm not gonna pretend that it suddenly became an entirely different car. Again, there is no negativity, or any other emotion or value judgement about it. It's just a fact.

Dude, you are gaslighting pretty hard.

It's not that many pages back that you were saying Zen 2 -> Zen 3 was a whole new uArch, while saying Intel was stuck on Skylake forever and that Raptor Lake was a new uArch vs Alder Lake. If you mark off 'cache' on a diagram and look at what changed inside the cores, not much changed in any of those.

AMD hasn't really done what I would call a new uArch since Zen 2, and even that is giving them extra credit for doing CCDs with Zen 2.

In the same sense, Meteor Lake despite using tiles is still basically RPL aka ADL on a new node with tiles. But that was expected.

Zen 5 is clearly a new uArch. It goes from 4 ALUs to 6, the FP and store pipelines are 2x wider with more of them, and it can handle AVX-512 in one pass instead of two. This is probably where Zen will live for the next 4-5 years, future tweaks aside. There's a clear distinction between doing something like that, which is foundational for the future, vs mucking with cache, increasing frequency, or dropping extra cores somewhere.

Arrow Lake is just Meteor Lake, aka Alder Lake enhanced v4, for desktop. I'm sure we'll see more performance gain, but it's the same uArch. I never had high expectations of big performance gains here, because it's just a modification of the same old stuff.

Lunar Lake will be the next truly new uArch from Intel, but it's coming to laptops first, end of 2024/start of 2025.

I might actually get one of these next gen uArch laptops if they do well. Desktop was great during covid but I normally travel a fair bit and like to be able to take it all with me.
 
@Dragam1337 @fevgatos

This thread is called:

Is Intel going to deliver a processor/chipset worth waiting for?


You've both decided to have a keyboard conflict and derail it. Thread banned, both.
 
Dude, you are gaslighting pretty hard.

It's not that many pages back that you were saying Zen 2 -> Zen 3 was a whole new uArch, while saying Intel was stuck on Skylake forever and that Raptor Lake was a new uArch vs Alder Lake. If you mark off 'cache' on a diagram and look at what changed inside the cores, not much changed in any of those.

AMD hasn't really done what I would call a new uArch since Zen 2, and even that is giving them extra credit for doing CCDs with Zen 2.
Perhaps not within the cores, but with the whole CCD design, which is equally important in my opinion. But if you don't consider that as new or as important as I do, that's fair; you have every right not to.

I don't think what you're saying contradicts what I'm saying - you're just looking at it from a different perspective, so I don't see where the gaslighting comes in.

In the same sense, Meteor Lake despite using tiles is still basically RPL aka ADL on a new node with tiles. But that was expected.

Zen 5 is clearly a new uArch. It goes from 4 ALUs to 6, the FP and store pipelines are 2x wider with more of them, and it can handle AVX-512 in one pass instead of two. This is probably where Zen will live for the next 4-5 years, future tweaks aside. There's a clear distinction between doing something like that, which is foundational for the future, vs mucking with cache, increasing frequency, or dropping extra cores somewhere.

Arrow Lake is just Meteor Lake, aka Alder Lake enhanced v4, for desktop. I'm sure we'll see more performance gain, but it's the same uArch. I never had high expectations of big performance gains here, because it's just a modification of the same old stuff.

Lunar Lake will be the next truly new uArch from Intel, but it's coming to laptops first, end of 2024/start of 2025.

I might actually get one of these next gen uArch laptops if they do well. Desktop was great during covid but I normally travel a fair bit and like to be able to take it all with me.
I agree with all of this.

Personally, I want much better efficiency, and possibly another heterogeneous homogeneous (edited - Latin isn't my strong suit) architecture from Intel before I'll look at buying from them again, but I can dream on, I guess.
 
The way to look at it is never write anyone off.

Remember when AMD released AMD64 and then the X2? When Intel had the P4, P4 HT and then the Pentium D?
Intel then swung back with Core and Core 2, and a relatively equal fight with AMD's Phenom.
Then Intel hit Nehalem/Sandy Bridge, AMD released Snoozedozer, and we all know how that went ¬_¬ Welcome to no competition for half a decade and effectively the 2700K rebadged five times over.
Then Zen came out and made Intel take stock, then Zen 2 and especially Zen 2+, and suddenly the fire is under Intel's ass again.



The big difference is that the market share Intel "securely" held by various means isn't anywhere near as secure as before, especially in the data center space. So how long can/will Intel be happy with losing market share slowly, year on year, in the data center?
 
The way to look at it is never write anyone off.

Remember when AMD released AMD64 and then the X2? When Intel had the P4, P4 HT and then the Pentium D?
Intel then swung back with Core and Core 2, and a relatively equal fight with AMD's Phenom.
Then Intel hit Nehalem/Sandy Bridge, AMD released Snoozedozer, and we all know how that went ¬_¬ Welcome to no competition for half a decade and effectively the 2700K rebadged five times over.
Then Zen came out and made Intel take stock, then Zen 2 and especially Zen 2+, and suddenly the fire is under Intel's ass again.



The big difference is that the market share Intel "securely" held by various means isn't anywhere near as secure as before, especially in the data center space. So how long can/will Intel be happy with losing market share slowly, year on year, in the data center?

The uArch is important precisely because of what happened to Intel. A new uArch actually lasts 4-5 years. If you stay on it too long, you wind up like Intel. If you mess up and go to a crappy new uArch, you wind up like AMD with Bulldozer.

So the stakes are high with Lunar Lake and Zen 5.

Edit: I should probably say, Intel never had a uArch design problem. The Alder Lake uArch was finished in 2018 and should have launched in 2019, which would have put it up against Zen 2. They just couldn't get off Skylake thanks to node issues, so they ended up core-count limited.
 
Apparently some people have nothing else to do on a Thursday... :D

I was mostly OK with that. A lot of differing views were expressed in here, challenging preconceived or debunked notions. What I saw, at least, was mostly friendly back-and-forth with the vitriol kept to a minimum. In life you accept that some chaff comes with the wheat.

A few people now have time to find another use for their Friday.
 
The uArch is important precisely because of what happened to Intel. A new uArch actually lasts 4-5 years. If you stay on it too long, you wind up like Intel. If you mess up and go to a crappy new uArch, you wind up like AMD with Bulldozer.

So the stakes are high with Lunar Lake and Zen 5.

Edit: I should probably say, Intel never had a uArch design problem. The Alder Lake uArch was finished in 2018 and should have launched in 2019, which would have put it up against Zen 2. They just couldn't get off Skylake thanks to node issues, so they ended up core-count limited.

uArch design problems do sort of happen when you get stuck on a node for a while; it has happened to both AMD and Intel. Recently Intel was said to be spinning off most of its fab business to operate separately. That looks similar to what AMD did, but I think it has to do with the ability to take their designs to other foundries and test them at a much faster rate than using only their own fabs. Their own fabs seem to be stifling development for them. When AMD was stuck with just GlobalFoundries, they had limited choices about when and how often they could spin out a new design on the node they had. It's also hard to do validation when you're worried about a lot of production being slowed down for an experiment.
 
The question once again is will Intel deliver a processor and chipset worth waiting for?
Depends heavily on usage and task.

Hypothetically speaking, I bet 90% of users wouldn't be able to tell a noticeable difference between a midrange 12th gen and a future midrange 15th gen in a blind test.

I mostly play single-player games and only run moderate-intensity applications, so for my usage I wouldn't pass up a 12th or 13th gen on a good sale, or pre-owned in the classifieds.
 
Their own fabs seem to be stifling development for them. When AMD was stuck with just GlobalFoundries, they had limited choices.
It's more the fact that Intel previously had a massive lead in fab development vs GF and even TSMC, back in the Core 2xxx/3xxx/4xxx days. However, those days are long gone and Intel is in fact suffering the same things AMD did when they were tied to GF. Back then Intel had access to 22nm while AMD was stuck on 32nm and TSMC was on 28nm.

Isn't 13th/14th gen currently on an effective 7nm vs AMD on 5nm, IIRC?

AMD was also contractually locked into GF after divesting their fabs to them, and I remember those contract terms were supposedly BRUTAL for AMD.

Intel is just hedging their bets in case 20A/18A doesn't deliver or suffers setbacks. When they get 20A, and especially 18A, they may have a slight node lead over TSMC, especially with how Apple seems to buy out all the initial production of TSMC's latest node. But if 14nm++++++ happens again, they can at least tap into TSMC and even Samsung, which I don't think they could do before, both from a technical perspective (designs not optimised for TSMC/Samsung fab quirks/design choices) and politically (why use a third party when we've always used our own "superior" fabs?).
 
It's more the fact that Intel previously had a massive lead in fab development vs GF and even TSMC, back in the Core 2xxx/3xxx/4xxx days. However, those days are long gone and Intel is in fact suffering the same things AMD did when they were tied to GF. Back then Intel had access to 22nm while AMD was stuck on 32nm and TSMC was on 28nm.

Isn't 13th/14th gen currently on an effective 7nm vs AMD on 5nm, IIRC?

AMD was also contractually locked into GF after divesting their fabs to them, and I remember those contract terms were supposedly BRUTAL for AMD.

Intel is just hedging their bets in case 20A/18A doesn't deliver or suffers setbacks. When they get 20A, and especially 18A, they may have a slight node lead over TSMC, especially with how Apple seems to buy out all the initial production of TSMC's latest node. But if 14nm++++++ happens again, they can at least tap into TSMC and even Samsung, which I don't think they could do before, both from a technical perspective (designs not optimised for TSMC/Samsung fab quirks/design choices) and politically (why use a third party when we've always used our own "superior" fabs?).

Yes, Intel's 10nm / Intel 7 has always been roughly equivalent to TSMC N7, which is commonly called "7nm". TSMC basically redefined what a node name means, so for marketing reasons Intel had to follow suit and rename it to Intel 7. Intel's 14nm was both higher power and denser than TSMC's "12nm", so those node names have been disconnected from reality for some time.

There's also the fact that Intel 3 is a reconfigured Intel 4 node, and they need that equipment to ramp up Sierra Forest and Granite Rapids. I think they simply did not have the capacity to put all the desktop/laptop parts on Intel 4 and still have enough equipment left to set up Intel 3 for their new data center chips. We should see the Intel 3-based Sierra Forest in the middle of this year.
 
The uArch is important precisely because of what happened to Intel. A new uArch actually lasts 4-5 years. If you stay on it too long, you wind up like Intel. If you mess up and go to a crappy new uArch, you wind up like AMD with Bulldozer.

So the stakes are high with Lunar Lake and Zen 5.

Edit: I should probably say, Intel never had a uArch design problem. The Alder Lake uArch was finished in 2018 and should have launched in 2019, which would have put it up against Zen 2. They just couldn't get off Skylake thanks to node issues, so they ended up core-count limited.
I wanted to say in my last post that I'm waiting for a homogeneous architecture from Intel (I'm not a fan of Windows 11). I'm too tired to do Latin on a Friday. :D Anyway... :)

I think both companies had one major design problem each. For AMD, it was definitely Bulldozer, and for Intel, it was the first generation Atom. I remember being asked to install Windows on one, and man... I'd never seen such a slow IT device in my life! The only difference is that Bulldozer was supposed to be the bread and butter of AMD, while Atom was/is just a side project for Intel.
 
No no, lived it! Not just read about it...

Too cynical. They have done it before, and they have the research capability and money to do it again. They already have BPD and Foveros, which TSMC and AMD do not, so don't be too quick to write them off. They have the speed, just not the power efficiency, and having just switched from monolithic to tiles, they have the means. I would put a bet on it.
 
I think the scalability of Intel's current uArch is the main issue. For AMD, it's not hard to scale from 6 cores to 192 cores with the relevant CCD designs and IO die.
For Intel it's been much harder, and they have no answer for the extremely high-density core designs AMD currently has, let alone what the theoretical power draw may entail.

Now, with things like Foveros it will be quite interesting to see what they do, as they could possibly layer the equivalent of AMD's IO die BELOW their cores. That would mean physically smaller chips, a fabric link that doesn't become a bottleneck the way Infinity Fabric currently does for AMD, and it should let them scale the compute/cores much more easily.
 
I think the scalability of Intel's current uArch is the main issue. For AMD, it's not hard to scale from 6 cores to 192 cores with the relevant CCD designs and IO die.
For Intel it's been much harder, and they have no answer for the extremely high-density core designs AMD currently has, let alone what the theoretical power draw may entail.
Do you think that there is some fundamental reason why Intel even now could not build a monolithic chip with any number of P-cores between say 1-12 for a PC?

You do not need many cores in an office or home PC. AMD can get away with just 8 cores in their best available gaming CPU.

Professional workstation and server CPUs are a different case.
 
Do you think that there is some fundamental reason why Intel even now could not build a monolithic chip with any number of P-cores between say 1-12 for a PC?
Because what do you do? Base every single chip on that 12-core die and disable/use defective dies further down the stack?
Or do you have to spin one design for, say, 6 cores and lower, and another for 12 down to 6 cores?

The first means that 6-core SKUs and lower are, in theory, loss leaders, as the cost of the die is massive vs the selling price.

The other means you now have to engineer two separate designs and then decide how to market them. What happens to those 12-core dies that only have 6 working cores? Do you salvage them and sell them alongside the native 6-core design? And what about the power/heat differences between those two dies?
 
I wanted to say in my last post that I'm waiting for a homogeneous architecture from Intel (I'm not a fan of Windows 11). I'm too tired to do Latin on a Friday. :D Anyway... :)

I think both companies had one major design problem each. For AMD, it was definitely Bulldozer, and for Intel, it was the first generation Atom. I remember being asked to install Windows on one, and man... I'd never seen such a slow IT device in my life! The only difference is that Bulldozer was supposed to be the bread and butter of AMD, while Atom was/is just a side project for Intel.

Yeah, I forgot about Atom. The first generations were definitely a fail, which is why Intel never got a big part of the mobile space.

They did thrive, and still do, in the embedded space though. There's a reason for that: early Atom didn't do out-of-order execution. A lot of embedded systems rely on highly deterministic, i.e. extremely consistent, performance. Out-of-order execution can be 'jerky' in terms of performance, since how a particular set of instructions performs depends on what else is waiting to execute (we're talking assembly / machine-language level here), while in-order is consistent. Systems are so fast now that I don't think it makes as much difference today, but 10+ years ago it wasn't that way. I know Atom was used extensively in PLCs for automation.
 
I was mostly OK with that. A lot of differing views were expressed in here, challenging preconceived or debunked notions. What I saw, at least, was mostly friendly back-and-forth with the vitriol kept to a minimum. In life you accept that some chaff comes with the wheat.

A few people now have time to find another use for their Friday.
Hi,
Yeah, 12 pages in and the answer to the question is an obvious yes.

The same can be said for AMD as well.
Time passes, tech/chips progress, and they're usually going to be better :doh:

The only thing dictating whether one should wait is need.
 
I mean obviously we can't know until it releases.

But my view so far is something like... I think Arrow Lake will be good, but at the same time not so much better than what's out now that it justifies waiting if you need to upgrade now.
 
It's tick and tock at the same time: new arch and new node. Usually it's either one or the other, or multiple ticks with no tocks. So we wait. Let's hope it's monolithic, because the only good thing about tiles would be getting rid of the iGPU completely for the KF parts, and a separate northbridge is not a good idea. This time Intel delivers.
 
Because what do you do? Base every single chip on that 12-core die and disable/use defective dies further down the stack?
Or do you have to spin one design for, say, 6 cores and lower, and another for 12 down to 6 cores?

The first means that 6-core SKUs and lower are, in theory, loss leaders, as the cost of the die is massive vs the selling price.

The other means you now have to engineer two separate designs and then decide how to market them. What happens to those 12-core dies that only have 6 working cores? Do you salvage them and sell them alongside the native 6-core design? And what about the power/heat differences between those two dies?
The answer to these questions depends on the defect rates in chip manufacturing, the intended final products, and their quantities. I think it would be a pretty easy optimisation task to figure out how many different chips should be produced.

Anyway, Intel is obviously going to follow the route of gluing different modules together to make a CPU, but I still think that most of the office/home PC market could be served by just two monolithic chip designs.
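
Just to illustrate the kind of optimisation I mean, here's a minimal sketch using a simple Poisson yield model. Every number in it (defect density, wafer cost, die areas) is made up for illustration, not real foundry data:

```python
import math

# All numbers below are made up for illustration -- not real foundry or Intel data.
DEFECT_DENSITY = 0.1              # assumed defects per cm^2
WAFER_AREA_CM2 = math.pi * 15**2  # 300 mm wafer, ignoring edge exclusion
WAFER_COST = 10_000.0             # assumed cost per processed wafer

def poisson_yield(die_area_cm2):
    """Fraction of dies with zero defects under a simple Poisson yield model."""
    return math.exp(-DEFECT_DENSITY * die_area_cm2)

def cost_per_good_die(die_area_cm2):
    """Wafer cost divided by the expected number of defect-free dies."""
    dies_per_wafer = WAFER_AREA_CM2 / die_area_cm2  # ignores edge/scribe losses
    return WAFER_COST / (dies_per_wafer * poisson_yield(die_area_cm2))

# Option A: one native 12-core die (~2.0 cm^2); defective dies get harvested as 6-core SKUs.
# Option B: add a separate native 6-core die (~1.2 cm^2) for the low end.
print(f"12-core die, cost per perfect die: {cost_per_good_die(2.0):.0f}")
print(f"native 6-core die, cost per die:   {cost_per_good_die(1.2):.0f}")

# A fuller optimisation would also count partially defective 12-core dies that can
# still be salvaged as 6-core parts, and weigh the design/tooling cost of a second
# die against the expected sales volume of each SKU.
```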
 
Obviously they will; writing off a company of Intel's caliber as completely incapable is silly. If AMD managed to pull through some really bad times when they were releasing straight trash in the CPU market, then Intel surely will bounce back.
Hell, it's not like their current offerings are awful. They might be somewhat inferior to AMD's in some regards, but still solid. We are a far cry from the absolute slaughter that was Bulldozer vs Sandy/Ivy.

This is absolutely true.

Nowhere close to, or even comparable with, the slaughter of Bulldozer and its variants vs Sandy/Ivy/Haswell and the rest of the pre-Zen era.

In fact, in the consumer desktop and laptop space there isn't even a clear-cut performance advantage for AMD if you take power consumption out of the equation. Intel's Raptor Cove P-cores at clock-normalized levels still provide better IPC than Zen 4, by something like 7%. Even Golden Cove has a slight IPC lead over Zen 4 in most workloads, though it's tiny, like 2-3% at most, if that.

So in the desktop and mobile space, Intel is far from inferior to AMD based on the above.

However, even with Golden Cove having 2-3% better IPC and Raptor Cove only about 7% better, power draw is enormously higher. And while that's not a concern for most enthusiasts who have huge cases and don't care about noise and/or who water cool, it matters to those who like noise-dampened cases and quiet systems with an RTX 4090, which in and of itself dumps a lot of heat into the case. On top of that, an Intel CPU dumps that much extra heat into the case, heating up other components, vs an X3D or other Ryzen 7000.

Yes, LGA 1700's 10nm-class node is easier to cool than 7nm and of course 5nm at the same power draw, but that's almost irrelevant when power draw is so much higher with Intel that it still runs just as hot or hotter on a dual-tower air cooler. Plus the excess heat dumped into the case heats up other components faster.

To me that matters, as I use a Silent Base 802 with an RTX 4090, and oh boy, with Intel it heats up more than with a 7800X3D. Despite the 7800X3D being much harder to cool relative to its lower power, it dumps so much less heat into the case than Raptor Lake.

Though both platforms have their pros and cons.

The AMD platform is more advanced in PCIe lanes to the CPU, as you can get two dedicated x4 direct-to-CPU NVMe slots without sacrificing the full x16 direct GPU lanes, whereas Intel only has one. Though Intel at the chipset level is a little more stable. At the CPU level, AMD is probably more stable, given Raptor Lake's random stability issues with manual overclocks that seem stable and then boom, they're not. Plus the Intel DDR5 memory controller, while more performant and capable of much higher speeds, has more random instabilities compared to AMD, where if it boots at EXPO 6000 or DOCP it's usually stable with no randomness.

So both have pros and cons, and they compete with each other.

Now, in the server and enterprise space, AMD is beating Intel badly, as Sapphire Rapids, despite being Golden Cove, has severely slower L3 cache and gimped IPC compared to consumer/client Golden Cove.

I mean, the Intel 12400 spanks the Xeon w5-2455X at 200 MHz lower clocks in Cinebench 2024 single-thread:



So while it's maybe not the slaughter in the enterprise space that the Bulldozer days were in the consumer space, AMD is whipping Intel there for now, though Intel can survive more on brand name in that space.

SPR has slightly worse IPC than Zen 3 per Cinebench 2024, which is hugely disappointing, as there's no HEDT option with more than 8 P-cores at Golden Cove or better IPC, unlike the Broadwell-E and earlier days, when HEDT had the same IPC as desktop Haswell and Broadwell. Never mind the fact that it's also on a mesh, which sucks for gaming. That was also the case for Skylake-X and Cascade Lake-X, though I don't think those two had the gimped IPC that SPR has compared to the client versions, despite the gimped latency.

Latency with 4 KB Pages:



To answer the actual thread-title question, though, as to whether Intel will deliver a processor or chipset worth waiting for: well, I hope so.

I want something with more than 8 P-cores on a single ring bus with the Golden Cove or Raptor Cove architecture, and not just more E-cores and hybrid arch. I would buy such a CPU despite its power requirements.

I also wish AMD had a CPU with more than 8 cores on a single CCD. They have CPUs with more than 8 P-cores, but those are dual 8-core or 6-core CCDs, and cross-CCD latency is terrible for gaming.

While it's true that right now 8 cores is enough or more than enough for high-end gaming, it would be nice to have some headroom for the rare games that may scale beyond 8 cores without having to go the E-core route and its scheduling issues, or the dual-CCD route and its scheduling issues, and without sacrificing latency in all other games - a set-and-forget solution without resorting to Process Lasso and such.

Intel has the ability to do it, so come on - I am waiting for it, and it's worth waiting for, so deliver it, Intel.

I am not interested in an Arrow Lake with a max of 8 P-cores, no HT, only a 5% single-thread uplift, and more advanced E-cores. I would rather go with an 8-core Zen 5 X3D in that case, even though AMD is also going to max out at 8 cores per CCD with Zen 5 in the desktop space. Even in the HEDT space, where they will have a 16-core CCD, those will be cache-gimped Zen 5c cores, not regular Zen 5 cores on a single CCD.

It's easier for Intel to make such a CPU than it is for AMD, as AMD is constrained by the TSMC process: they just have 8-core CCDs coming off the line and can only mark two cores defective or leave them all enabled, selling 6- and 8-core single-CCD CPUs or dual-CCD versions of those, which become their 12- and 16-core counterparts. They have been doing that since Zen 3, and it appears they will continue with Zen 5, as that is cheaper than separate CCDs with different core counts.

Intel, on the other hand, makes their own, and they even had separate dies for Comet Lake, with the 10-core one being a different die from the 8-core and below.

So come on, Intel: deliver a 10-12 P-core Golden Cove or Raptor Cove CPU on a ring bus, on LGA 1700 or on Arrow Lake with whatever new tile-based tech or process it's on. You have a buyer in me. If you just put more E-cores in, I am sticking with 8-core X3D chips or 8-core regular single-CCD ones from AMD, and will just deal with not taking advantage of games scaling beyond 8 cores, whatever penalty that is.

If it can do 5.5 GHz at the same power, that is still 1 GHz more. Pretty good.
I'm still not convinced that Intel has a working 20A; maybe they simply pull a Bartlett Lake 12-core and call it a day.
The best part is if the "F" CPU arrives without a GPU tile, so we don't overpay for something useless that just sits there.

I'd be very happy if Intel instead pulled a 12 P-core Bartlett Lake (it had better be on a ring bus, even if they ditched the Arrow Lake 20A node). I would buy that and keep it instead of Zen 5, even if/when Zen 5 has a 15-20% or even higher single-thread uplift, as they will still have only 8 cores per CCD. I would still have the great Raptor Cove IPC but with an extra 2-4 cores, and as games become more threaded I would do just fine, plus extra cores for background tasks without the potential E-core hybrid-arch headaches.
 
I'd be very happy if Intel instead pulled a 12 P-core Bartlett Lake (it had better be on a ring bus, even if they ditched the Arrow Lake 20A node). I would buy that and keep it instead of Zen 5, even if/when Zen 5 has a 15-20% or even higher single-thread uplift, as they will still have only 8 cores per CCD. I would still have the great Raptor Cove IPC but with an extra 2-4 cores, and as games become more threaded I would do just fine, plus extra cores for background tasks without the potential E-core hybrid-arch headaches.

Bartlett's supposed to be Raptor Cove-based. I do question the practical utility of the 12 P-core configuration for gaming over the current hybrid processor line, but priced right, sounds like a fun chip to me!
 
Bartlett's supposed to be Raptor Cove-based. I do question the practical utility of the 12 P-core configuration for gaming over the current hybrid processor line, but priced right, sounds like a fun chip to me!


How would you compare it to only 8 P-cores from either AMD or Intel (13700K, 12700K, or 14700K with E-cores disabled, or a 7700X or 7800X3D) and no E-cores for gaming? Is there any merit in having more than 8 cores for high-end gaming with an RTX 4090, then a 5090?

I want a 12 P-core Raptor Cove, as I am not fond of E-cores and Win11. Also not fond of AMD's dual CCDs. I want a 10900K-style setup with the current CPU archs, and to also not be stuck at PCIe Gen 3.

As much as I would love to see a 12 P-core Bartlett Lake on a ring bus, I have my doubts it's going to happen despite the rumors. Some rumors even say a Xeon D, which probably means a fused-off Sapphire Rapids put into an LGA 1700 package, with, uhh, the mesh and gimped IPC that Intel sells it with. But here's to hoping Intel wakes up and delivers such a fun CPU, with 12 Raptor Cove P-cores on a ring bus with consumer-level IPC.

It would take less power than the 13900K and 14900K, as 4 E-cores take slightly more space than 1 P-core, and maybe a little more power too, unless they really ramp up the all-core clocks. But as long as all-core is 5 GHz, I am happy. It would beat the Ryzen 7900X in productivity, smash it in gaming, be close to the 7800X3D all around, and be better in the few (and likely growing number of) games that can scale to lots of threads.
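
As a rough sanity check of that area/power point, here's a back-of-the-envelope sketch. Every per-core figure in it is an assumption for illustration, not a measured Raptor Lake number:

```python
# Hypothetical per-core figures, for illustration only -- not real die measurements.
P_CORE_AREA_MM2 = 7.0        # assumed P-core area incl. private L2
E_CLUSTER_AREA_MM2 = 8.0     # assumed 4x E-core cluster ("slightly more than 1 P-core")
P_CORE_POWER_W = 20.0        # assumed per-P-core power at all-core load
E_CLUSTER_POWER_W = 22.0     # assumed 4x E-core cluster power at all-core load

# 13900K/14900K-style layout: 8 P-cores + 16 E-cores (4 clusters)
hybrid_area = 8 * P_CORE_AREA_MM2 + 4 * E_CLUSTER_AREA_MM2
hybrid_power = 8 * P_CORE_POWER_W + 4 * E_CLUSTER_POWER_W

# Hypothetical 12 P-core layout (the Bartlett Lake wish above)
pcore_area = 12 * P_CORE_AREA_MM2
pcore_power = 12 * P_CORE_POWER_W

print(f"8P+16E: {hybrid_area:.0f} mm^2 of cores, ~{hybrid_power:.0f} W core power")
print(f"12P:    {pcore_area:.0f} mm^2 of cores, ~{pcore_power:.0f} W core power")
# With these assumptions the 12 P-core part comes out slightly smaller and slightly
# lower power at the same clocks; real figures depend on clocks, workload and binning.
```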
 
That idle power figure keeps getting bumped up with every post it's mentioned in, lol.

The 14600K uses 93 W on average in applications (TPU), and the 7600X uses 60 W, but you conveniently left out that part.

I'll never not be both amused and baffled by people obsessed with efficiency and power usage costs. In what sort of financial situation does one have enough money for new PC hardware yet need to count every watt-hour consumed to minimize costs? Very bizarre.
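
For what it's worth, a quick back-of-the-envelope on the 33 W application-load gap quoted above (usage hours and electricity price are assumptions, not measurements) shows how little is actually at stake:

```python
# Back-of-the-envelope cost of the 33 W application-load gap quoted above.
delta_w = 93 - 60        # W difference under application load (TPU figures above)
hours_per_day = 4        # assumed hours of that kind of load per day
price_per_kwh = 0.30     # assumed electricity price per kWh

kwh_per_year = delta_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"~{kwh_per_year:.0f} kWh/year, roughly {cost_per_year:.0f} per year at {price_per_kwh}/kWh")
# -> about 48 kWh/year, i.e. somewhere around 14 a year with these assumptions.
```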
 