
Intel "Skylake" Die Layout Detailed

So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?


The bottleneck for most consumers isn't CPU but HDD or internet performance. The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy (e.g. a 4K monitor is going to put a lot more stress on the hardware than a 1080p monitor).
 
Customers have no choice. They just buy what they know, and the propaganda machine works and tells them: forget AMD, buy Intel. And because Intel has the cash to keep that machine running, it simply still works for them. We will see for how long. :)
That's not the point. There simply are limitations to what more cores can do. Not every application can fully utilize a quad core, not because they aren't multi-threaded applications but because there is too much locking going on to actually realize that much CPU compute. Intel isn't pushing core counts because it costs money for very little gain for the average consumer.
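The diminishing returns from locking can be made concrete with Amdahl's law: if some fraction of the work is serialized (e.g. behind locks), the speedup is capped no matter how many cores you add. A quick sketch; the 80%-parallel figure is an illustrative assumption, not a measurement of any real application:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that can run in parallel
# and n is the number of cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assume 80% of an application parallelizes and 20% is serialized
# behind locks (an illustrative figure, not a measurement).
p = 0.8
for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(p, n), 2))
# 2 cores -> 1.67x, 4 -> 2.5x, 8 -> 3.33x, and even 64 cores
# stay capped below 1 / (1 - p) = 5x.
```

Going from 4 to 8 cores buys only about a 33% gain here, which is why doubling cores rarely doubles performance for consumer workloads.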

So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?


The bottleneck for most consumers isn't CPU but HDD or internet performance. The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy.
Not just that, but they'd actually have to utilize all those cores. Most applications run well over 4 threads; it's just that, more often than not, threads have to block for input or for another thread to release resources that need to be thread-safe. More cores is great for servers, but I see absolutely no justification for it on a typical consumer platform. I agree with you on this point, if it wasn't obvious.
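That blocking is easy to reproduce: hand several threads a shared, lock-protected resource and they spend their time waiting on each other instead of computing. A minimal sketch (the names and the counter workload are mine, not from any real application):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Every thread must take the same lock to keep the counter
        # thread-safe, so the increments are fully serialized: the
        # threads mostly wait on each other rather than run in parallel.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The answer is correct, but 8 threads bought no speedup: the locked
# section is the whole workload, so it ran one thread at a time.
print(counter)  # 800000
```

Eight cores sit mostly idle here even though eight threads exist, which is the gap between "multi-threaded" and "actually using the cores."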
 
More cores is great for servers...
Only if the server can utilize those extra resources. That's my point in asking that rhetorical question.

I think the only thing on the horizon that changes the quad-core-is-enough consumer paradigm is virtual reality.
 
I think the only thing on the horizon that changes the quad-core-is-enough consumer paradigm is virtual reality.
How is that not GPU compute? Excluding VR, high resolutions put more strain on GPU compute than on CPU compute. I wouldn't call that an argument for more CPU power; if anything, it validates our point that there isn't a whole lot of purpose to more cores for the average consumer.
 
The VR system has to not only compute the virtual environment but also the real environment. For example, the system Valve is playing with uses LADAR to make sure the user doesn't crash into anything in the physical realm. As VR improves, it could even incorporate real objects into the fictional world compounding the amount of CPU resources needed.
 
The VR system has to not only compute the virtual environment but also the real environment. For example, the system Valve is playing with uses LADAR to make sure the user doesn't crash into anything in the physical realm. As VR improves, it could even incorporate real objects into the fictional world compounding the amount of CPU resources needed.
My point is: isn't VR computation done on the GPU, much like 3D is before it's written to the screen? The LADAR stuff makes sense, since it's essentially a pseudo-realtime thing, but the VR rendering feels like it falls in the realm of GPU compute because of the nature of what it's doing to what's being rendered. Much like draw calls and AA, it's highly parallel. Either way, I still feel the need for more GPU power will continue to outpace the need for more CPU compute, at least for the time being.
 
As far as I know, the GPU would only take care of the game elements it takes care of now (rendering and physics). The CPU is much better suited to the workloads that are unique to VR (like LADAR). These aren't highly parallel because other components have to respond to the new data on every pass.

Oh, sure. Pretty much all software out there that uses the GPU will "smoke 'em if you got 'em." CPU workloads are always more subdued by design, because excessive CPU use is just wasteful. GPU demand will always exceed CPU demand simply due to the nature of what they do.

All I'm saying is we haven't seen the end of new workloads for CPUs. VR is just one example.
 
So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?


The bottleneck for most consumers isn't CPU but HDD or internet performance. The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy (e.g. a 4K monitor is going to put a lot more stress on the hardware than a 1080p monitor).

I actually find myself disabling Hyper-Threading when I'm not working on this PC. Gaming-wise, anything more than 4/6 cores is just pure epeen, but since I often like to render mid-work, I like having the option to do that quickly to see what will come out in the end :)

Anything that renders pegs the CPU at 100% constantly. The program I use scales up to 64 threads; I could use more, but 2P platforms are crap for multi-purpose computing that also requires high single-threaded performance.
 