Many people truly just want the best regardless of who makes it. For the longest time that has been Intel, but they've been caught sleeping, so AMD is on fire right now. And Intel doesn't have a true answer yet, so it looks like they're just dumping anything and everything on the market to stay relevant. No one's going to go for that.
If the rumors of core counts greater than four on mainstream chips are true, it could be that more pins are required than are available on 1151. Or it's just Intel being Intel and sticking with their two-generation-per-socket strategy. It's lame either way; I was looking forward to replacing my 6600K with a Coffee Lake chip. Looks like I should be watching 7700K pricing instead.
There is a reason I have not upgraded since 2012, and just when I think it might be worth it, Intel lines its pockets again by forcing consumers to buy a whole new motherboard with its new CPU. Yeah, methinks it's time to go full team RED soon...
Yes, and if you play at 120 Hz with ULMB, that Ryzen is unplayable trash. 138/114 fps can be painful in fast-paced FPSs, and that's the overclocked 7700K.
Frame drops in ULMB or VR are extremely annoying and game-ruining. 1% is a lot of damn frames to be below 120 Hz; 0.1% is a far better metric.
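For anyone not familiar with those metrics, here's a minimal sketch of how "1% low" and "0.1% low" FPS numbers are usually computed from a frame-time log (the function name and the sample frame times are mine, not from any real benchmark run):

```python
# Rough sketch of deriving "1% low" and "0.1% low" FPS figures from a log of frame
# times (e.g. a frametimes CSV from FRAPS or PresentMon). The frame_times_ms list
# below is made-up sample data, not real benchmark output.

def percentile_low_fps(frame_times_ms, worst_fraction):
    """Average FPS over the slowest `worst_fraction` of frames (0.01 = 1% lows)."""
    slowest = sorted(frame_times_ms, reverse=True)       # longest (worst) frames first
    count = max(1, int(len(slowest) * worst_fraction))   # how many frames land in that slice
    avg_ms = sum(slowest[:count]) / count
    return 1000.0 / avg_ms                               # average frame time -> FPS

# ~120 fps most of the time, a handful of small drops, one big spike
frame_times_ms = [8.3] * 990 + [12.5] * 9 + [25.0]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
print(f"average FPS: {avg_fps:.1f}")                                    # ~119.7
print(f"1% lows:     {percentile_low_fps(frame_times_ms, 0.01):.1f}")   # ~72.7
print(f"0.1% lows:   {percentile_low_fps(frame_times_ms, 0.001):.1f}")  # ~40.0
```

Notice how the average still looks like 120 fps while the 0.1% lows expose the single big spike, which is exactly why the 0.1% figure matters for ULMB/VR.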
The new Intel HEDT would also be awful for that. I'm holding out for the 8700K for my main gaming rig. If only I had the money to get a Threadripper for my server too :/
Also, what RAM are they using? A lot of games are heavily affected by RAM, according to a Eurogamer review I just read.
I rock the golden 3200 14-14-14-32/34. Never bothered to test the difference; I just buy quality.
The proper 300-series chipsets, aka "CNL PCH" (Cannon Lake Platform Controller Hub), will come well after the first wave of boards with the Z370, and will exceed the Z370's features in certain ways.
So things will start with a Z370 chipset very similar to the Z270 but mandatory for the new CPUs. The real new chipsets will come in early 2018 and will have a few improvements over the supposed flagship chipset.
That's the main issue: Coffee Lake will still use the LGA1151 socket, and from a hardware point of view everything on the motherboard and socket is the same... This just looks like a greedy push to force customers to upgrade.
Mind explaining, Assimilator? Are you referring to the increased power requirements of some of the Coffee Lake chips, or to chip and board layout? Interposers have power planes and decoupling capacitors on them; power can be delivered to any part of the chip with just a via connecting to the proper plane.
The current crop of games is split between those that rely on single-core performance and those that use multiple cores better, and future games will still perform according to that difference. As long as Intel keeps the crown for single-core gaming, people will have to compromise if they do a high-core-count AMD build and want to play a single-core-bound game. I don't think the new Intel HEDT will be as awful as you think. I'd rather see performance close or even similar and have it boil down to a price war.
VR will definitely benefit from more cores and can't afford to drop frames, not even 0.1%.
A 6-core at 5 GHz will trash any HEDT system in gaming, period. (A 4-core at 5 GHz still beats all HEDTs except in niche areas.) The slide above shows that; it's not even my source material.
Also, sure, future games will use more threads, but that's 1, 2, 3, or 6 years from now; people have been saying this for the last 5 years, and yet we keep saying it. Most games that are not AAA are still single-threaded, and that doesn't count the 10,000 older games that are still limited to 1 or 2 threads.
Multi-core gaming is here; it's still a bit early, and we might already be seeing the full effect if it didn't take so damn long to develop a game that takes advantage of it. That's also why we don't see more new games using multiple cores: there's a rush to push games out left and right, and devs just don't have the time. There are a couple of MMOs that have already switched to multi-core support; Rift comes to mind. Not sure of any others that made such a jump.
I don't think any HEDT will beat a non-HEDT in gaming; that seems like common knowledge. I didn't think Intel could push 5 GHz on all 6 cores; I see varying stories.
I guess Intel has made my decision for me: if I need a new motherboard because Z270 won't take a CPU with more than four cores, then I'll just jump ship to AMD while I'm at it.
If this IS true, this is the end for me and Intel. LGA1151 v2? WTF is that? It's Intel profiteering from what was a loyal gaming fan base, is what it is.
AMD could not have asked for a better incentive from Intel if they had blackmailed them.
I am sure Intel has a great excuse lined up, and I am sure this is just them testing the waters before they finally decide we are dumb enough to wear it and make it official.
This should be required reading for everyone, because for some reason, 35 years later, people still don't understand this, and it has been known since 1982! This is also why I don't use my server as my main rig: it is substantially slower in day-to-day tasks, and I can feel it.
War Thunder is still single-thread limited, and so is Planetside 2. In PS2 it is 100% impossible to get near 120 Hz, even on a 6700K at 4.8 GHz. War Thunder still has many sags and dips due to CPU limits, even on my 6700K at 4.8 GHz. The fact is, if it isn't an AAA game, it is single-threaded or has some shoddy 1.5-core threading (a main thread, with networking and other stuff on a second core). NS2 is single-thread limited too, like any older game. I'll take a fast single-thread rig any day of the year. Most indie games I know are single-threaded too. Again, if it isn't AAA, I would be shocked if it properly supports 4 cores, let alone 8+.
If I only played AAA games, sure, I would grab an HEDT, but regular day-to-day computing and 99% of all games are single-threaded, and many old games are still horribly single-thread limited, which is why I have the system I have.
This comes from someone who has always had one of the fastest single-thread systems available and an HEDT server next to him to compare, and who constantly runs monitoring software to watch for single-thread limits, and I can barely think of a few programs I use that are actually threaded. I would love it if this were not a fact, but it is, hence why I pay out of my ass for an SL chip with freakishly expensive RAM (3200 14-14-14-34). This is newer RAM, which is the better option: approximately the same latency and much better frequency.
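By "monitoring software" I just mean watching per-core load. A rough sketch of the idea in Python, using the third-party psutil package (pip install psutil); the 90%/40% thresholds are arbitrary numbers I picked for illustration:

```python
# Sketch of watching for a single-thread bottleneck: sample per-core CPU usage and
# flag the "one core pegged, the rest mostly idle" pattern.
import psutil

def watch(hot=90.0, idle=40.0, interval=1.0):
    while True:
        # per-core utilisation over the last `interval` seconds, hottest first
        per_core = sorted(psutil.cpu_percent(interval=interval, percpu=True), reverse=True)
        hottest, rest = per_core[0], per_core[1:]
        if hottest >= hot and (not rest or max(rest) <= idle):
            print(f"likely single-thread limit: hottest core {hottest:.0f}%, all cores {per_core}")
        else:
            print(f"all cores: {per_core}")

if __name__ == "__main__":
    watch()
```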
You can get 5.2 GHz 7700Ks, and 14nm++ is supposed to be better, so 4-core parts should see another 400 MHz according to historical trends and Intel's statements, meaning the 6-core should reach 5 GHz+ at the top end even with 2 extra cores.
Hardly; you just don't understand how most programs work and how much is still single-thread dependent. Compare my computer vs. a Ryzen in browsing and mine beats the crap out of it. Compare my system in OCR, PDF, Office, and any non-threaded program (a.k.a. nearly everything).
Most of Windows 7 is single-threaded too. Win10 has made some small improvements in threading the OS, in how windows (screens like Explorer) load, but much of it still isn't.
I would love to get an 8700K for my main rig and a 16-core Threadripper for my server, because I could use the extra cores for zipping and ripping, plus the extra PCIe lanes. But I can't afford the upgrade TT
I can confirm that CL won't work on current chipsets (or so I gather). It's been known for a while; all existing leaks/screenshots are either on Z370 or fake.
Was there a specific point in the blog you wanted to convey to everyone, or are you just trying to tell people that if they don't know what a mainframe is, they're stupid? Most people can't afford, or wouldn't use, separate server/gaming rigs (not counting consoles), so I won't waste my time discussing that point.
Note: the fact is 90% (not 99%) of indie games are shit, usually made by solo programmers who don't have the expertise but try anyway to fully realize a game without knowing, before the coding starts, what makes one good. Given the number of indie games that have flooded the market, I'd say, yeah, that 99% figure for "all games" using a single thread isn't too far off.
I was referring to indie as in anything not from a major developer, but most major-developer games are still single-thread limited except the big AAA titles... basically anything other than what these places review.
The study was showing user productivity with regard to response times. It has nothing to do with mainframes.
The Windows 7 fade animation is 250 ms, and it feels painfully slow. I turn off the fade animation and windows load in 30-50 ms. That is a night-and-day difference in responsiveness. Those speeds are below what IBM even studied... granted, 300 ms was huge back then, so they never bothered to try lower... this was 35 years ago.
Your actions per minute skyrocket the faster your system is, as the IBM study showed. It appears to be an exponential increase.
300 ms is not the limit: removing the Windows 7 fade animation allowed 30-50 ms responses when opening and closing windows.
This applies to anything: web browsing, the OS, anything you click and do.
This is why I use the system I have and remove BS animations, because they actually hurt your ability to complete tasks.
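If you'd rather script that than click through the Performance Options dialog, here's a rough sketch using Python's winreg. MinAnimate is only the minimize/maximize animation toggle (the fade effects live in the same Visual Effects settings, controlled elsewhere), and you may need to sign out and back in, or restart Explorer, before it takes effect:

```python
# Sketch: turn off the Windows minimize/maximize animation from a script.
# Windows-only; uses only the Python standard library.
import winreg

with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                    r"Control Panel\Desktop\WindowMetrics",
                    0, winreg.KEY_SET_VALUE) as key:
    # "0" = animation off, "1" = on; Windows stores this value as a string
    winreg.SetValueEx(key, "MinAnimate", 0, winreg.REG_SZ, "0")

print("MinAnimate set to 0")
```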
I also set my Android phone to run at 0.5x animation times, so it's twice as responsive, and it's soooooo much better.
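The Android side can be scripted too. A sketch that shells out to adb (assuming platform-tools is on your PATH and USB debugging is enabled) to set the same three animation scales that Developer Options exposes:

```python
# Sketch: set Android's three animation scales to 0.5x over adb.
import subprocess

for setting in ("window_animation_scale",
                "transition_animation_scale",
                "animator_duration_scale"):
    subprocess.run(["adb", "shell", "settings", "put", "global", setting, "0.5"],
                   check=True)
    print(f"{setting} -> 0.5")
```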
Web browsing on my 7500U vs. my 1650 v3 vs. my 6700K is hugely different, and I really loathe my 7500U and still get annoyed on my server because of how much slower pages load and how much less responsive it is.
It is like CRT vs. LCD, or ULMB LCD vs. plain LCD. Once you see it... feel it... notice it... you never go back.
Broad Applicability
The studies described up to this point involved scientists, engineers, and programmers. A test conducted with administrative professionals indicates that the same benefits can be realized with sub-second response time in data base applications. Component forecasters at IBM's Poughkeepsie facility make frequent reference to an online data base when estimating requirements for electronic parts. The work involves the maintenance of part inventories, bills of materials, and timetables of production and delivery, all tasks similar to those handled by production planners in many organizations.
Five component forecasters were provided subsecond response time for a half-day experiment during which their transaction rate productivity was measured. In their normal working environment they had a system response time of five or more seconds and an average individual productivity rate of 99 transactions per hour. During the test they worked at an average of 336 transactions per hour, a productivity increase of 239%.
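Just to put the two quoted figures side by side (nothing here beyond trivial arithmetic on the 99 and 336 transactions per hour from the passage above):

```python
# The two figures quoted above: 99 transactions/hour normally, 336 during the test.
before, after = 99, 336

print(f"throughput ratio:        {after / before:.2f}x")                      # ~3.39x
print(f"percent increase:        {100 * (after - before) / before:.0f}%")     # ~239%
print(f"seconds per transaction: {3600 / before:.1f} -> {3600 / after:.1f}")  # ~36.4 -> ~10.7
```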
Something like SAP is a prime example of this. If you work for a company and their SAP installation is slower than 300 ms response times, they are pissing a ton of money away in user productivity.
This applies to anything, but SAP is a great example of how this works and how annoying it is.
You can see that expert users, people who know exactly what they are doing, are hurt most by this delay, but even novices are badly hurt by it.
That's really interesting. My job isn't dependent on action throughput for productivity, but I did notice that it took me quite a bit longer to edit a spreadsheet when we upgraded to Office 2016/365, particularly on an older laptop where the additional animation introduced a perceptible lag to many input actions. It's much better with those animations turned off.
This applies to anything, even Windows 7 windows opening and closing, and web browsing. Now, different tasks will have different user response delays, so it isn't a perfectly linear scale. But if you log how many actions (clicks, moves, or whatever) you take per hour, you can get a feel for how much of an improvement you might see if you reduce system delay*.
The study above showed something like 1-2 seconds saved per action going from 600 ms to 300 ms response times, so you could probably save another 1-2 seconds per action by getting down to 100 ms responses, which is why I removed the Windows fade/transition animations and set my phone to 0.5x animation timing. I would set it to no animations at all, but certain tasks require them. Ever use Tinder or Bumble swiping with animations off? Did I swipe left or right? I have no idea; the picture just vanished! roflmao
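As a back-of-the-envelope example of the kind of estimate I mean (every number here is a placeholder I made up, not something from the study; log your own actions per hour and measure your own before/after delays):

```python
# Back-of-the-envelope estimate of time saved by cutting system delay.
actions_per_hour = 300      # clicks / window opens / page loads you actually perform
hours_per_day = 8
delay_before_s = 0.30       # ~300 ms perceived response before tweaks
delay_after_s = 0.05        # ~50 ms after turning animations off

saved_per_action_s = delay_before_s - delay_after_s
saved_per_day_min = actions_per_hour * hours_per_day * saved_per_action_s / 60

print(f"saved per action: {saved_per_action_s * 1000:.0f} ms")
print(f"saved per day:    {saved_per_day_min:.1f} min (raw delay only, before any flow/focus effect)")
```

And per the study, the real gain tends to be bigger than the raw delay difference, because you stay in flow between actions.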
*System delay: this counts mouse input lag (sample rate, input lag, and more), OS delay, display input lag, display refresh rate, and so on.
Going from a 125 Hz mouse to a 1000 Hz mouse saves you 7 ms.
Going to 120 Hz vs. 60 Hz may save you 8 ms, give or take.
A monitor with less input lag can save 10-50 ms.
Removing the Windows 7 animations can save 50-220 ms (no idea about Win10; I only use it on my laptop, and the amount of pointless time-wasting animations is rage-inducing).
Getting a 5.2 GHz CPU vs. a 4 GHz CPU can easily save 25% of your time in certain tasks (largely thanks to shoddy programming; the OCR on my Fujitsu document scanner is single-threaded. #!$&!#$&#$& Why!?!?!)
And the list goes on: simple optimizations that are not horribly expensive but add up if you go to the extreme like me. A rough tally of the items above is sketched below.
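Taking midpoints of the ranges above (ballpark figures from my own list, not measurements):

```python
# Rough tally of the per-interaction savings listed above, using midpoints of the
# quoted ranges. Ballpark figures only.
savings_ms = {
    "125 Hz -> 1000 Hz mouse":   7,
    "60 Hz -> 120 Hz display":   8,
    "lower-input-lag monitor":   30,    # quoted 10-50 ms
    "Windows 7 animations off":  135,   # quoted 50-220 ms
}

for item, ms in savings_ms.items():
    print(f"{item:<28} ~{ms} ms")
print(f"{'total per interaction':<28} ~{sum(savings_ms.values())} ms")
```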
The difference between my desktop and my laptop is horrifyingly annoying.