
Intel's Core i7-8700K Generational Successor Could be 8-core

HTC

Joined
Apr 1, 2008
Messages
4,637 (0.78/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
It's likely that everyone is going to have problems with the next wave of node shrinks. But something definitely ain't right in particular with Intel's 10nm node: they've had too many problems with it, and for too long.

I have a feeling they will eventually have to resort to other sources to manufacture their chips like pretty much everyone else.

That's because, by reducing the size of the node, there's quite a big leap in complexity, which is more on the exponential side rather than an additive one. Intel already had very big problems when transitioning from 22 to 14 nm.

Here's a video by AdoredTV that explains the problem Intel / AMD face when going for a smaller node:


The video highlights why one big die is worse than several smaller dies, but it also explains the problems inherent to adopting smaller nodes, which is why I linked it here.

We've yet to see if / how 7 nm is being affected, but I wouldn't be surprised if it too had problems: watch the video and you'll understand why.
 
Joined
Feb 13, 2012
Messages
522 (0.11/day)
It's likely that everyone is going to have problems with the next wave of node shrinks. But something definitely ain't right in particular with Intel's 10nm node: they've had too many problems with it, and for too long.

I have a feeling they will eventually have to resort to other sources to manufacture their chips like pretty much everyone else.
I think it's simply that Intel shot themselves in the foot by clocking their stock CPUs so high on their excellent, thoroughly refined 14nm process that their 10nm is having trouble sustaining those clocks. It's important to note also that these frequencies were familiar to us even in the 32nm days with overclocking. So the clock headroom didn't change much; what became possible was lower power and higher density (smaller chips). With that being said, moving to 10nm made no financial sense, as the transition cost didn't justify the savings of having smaller chips when performance and market position were going to stay the same. Moving forward there are 3 options for added performance:
1- increase IPC
2- increase core count at a given footprint
3- lower power usage at a given footprint/specification
 

bug

Joined
May 22, 2015
Messages
13,489 (4.02/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
That's because, by reducing the size of the node, there's quite a big leap in complexity, which is more on the exponential side rather then additive one. Intel already had very big problems when transitioning from 22 to 14 nm.

While that's true, let's not forget TSMC also had issues with 22nm, to the point that they scrapped the whole node. So yeah, as we approach the transistor's physical limits, things are getting increasingly nasty.
 

HTC

While that's true, let's not forget TSMC also had issues with 22nm, to the point that they scrapped the whole node. So yeah, as we approach the transistor's physical limits, things are getting increasingly nasty.

This is taken from the video I posted:

[Attached chart: 14 nm defect density over time, Samsung vs GlobalFoundries]

This chart shows the defect density of Samsung's (green) and GlobalFoundries' (orange) 14 nm nodes. Over time, they both reach around 0.1 defects per square cm.

At the beginning of each node, defect density is always higher because the process is just starting. As the process matures, defect density gets progressively lower, down to more "acceptable levels", and this makes the number of usable chips a lot higher, which in turn gives "more chances for golden samples" per wafer that Intel / AMD can sell as their top chips.

I'm not too familiar with how Intel does it, but for AMD's Ryzen, the top Zen dies go for Epyc chips while those that "don't make the Epyc cut" go for Threadripper: only those that "don't make the cut for both Epyc and Threadripper" go for "normal" Ryzen chips, and that's about 90+% of all dies. A lower defect density increases the chances of getting golden samples per wafer, thus enabling more of those very expensive CPUs. I assume Intel has its version of the same thing.

The problem is that we know Intel's 10 nm is having very serious yield issues (and / or other problems). I don't know the actual size of Ice Lake, so you'll have to use Coffee Lake's size to test: then, playing around with the defect density, you'll see how much "goes to waste". Remember: the smaller the node, the more chances there are for a defect to kill the chip, as opposed to just damaging part of it and leaving it salvageable as a less potent chip.
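To make the defect-density argument concrete, here is a minimal sketch of the classic first-order Poisson yield model (yield ≈ e^(−D·A)); the defect densities and die areas below are illustrative assumptions, not Intel's or AMD's actual figures.

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """First-order yield estimate: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0  # 100 mm^2 = 1 cm^2
    return math.exp(-defects_per_cm2 * area_cm2)

# Illustrative numbers only: a ~150 mm^2 die vs a ~300 mm^2 die,
# on an immature process (0.4 defects/cm^2) vs a mature one (0.1 defects/cm^2).
for die_mm2 in (150, 300):
    for d in (0.4, 0.1):
        y = poisson_yield(d, die_mm2)
        print(f"die {die_mm2} mm^2, D = {d}/cm^2 -> ~{y:.0%} of dice defect-free")
```

The same toy model shows both points in the post: bigger dies lose disproportionately more candidates per wafer, and a maturing process (lower D) recovers far more sellable, bin-worthy dies.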
 

bug

This is taken from the video I posted: … Remember: the smaller the node, the more chances there are for a defect to kill the chip, as opposed to just damaging part of it and leaving it salvageable as a less potent chip.
I'm guessing you're trying to say something here, but I don't follow.
 

HTC

I'm guessing you're trying to say something here, but I don't follow.

The yield problems Intel is having are so severe that they make the cost prohibitive, and that's assuming yields are the only issue, which apparently they are not.

Remember: Intel is a massive company. For the cost to be prohibitive for them, something serious must be happening.
 

bug

The yield problems Intel is having are so severe that they make the cost prohibitive, and that's assuming yields are the only issue, which apparently they are not.

Remember: Intel is a massive company. For the cost to be prohibitive for them, something serious must be happening.
I doubt you know that for sure (your apparent argument hinges on cost, but throwing money at a problem is not a surefire way of fixing it). Yes, there are problems. But they're not particular to Intel or 10nm.
 
Joined
Jun 10, 2014
Messages
2,919 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I saw their full X models are usually one gen behind the mainstream Z chipsets. So this Cascade Lake is some sort of Kaby/Coffee Lake hybrid. Next year Ice Lake X, and then the year after that Tiger Lake X, while normal Z gets, hmm, this Sapphire Rapids?
The HEDT platforms usually lag nearly one "cycle" behind, but don't think that makes them outdated or inferior. The HEDT platforms use more advanced CPUs and chipsets, and even though they are in the same "family" and share many design features, they do offer different memory controllers, double the memory bandwidth, better core interconnects, different cache, etc. So far the HEDT platforms have not lagged behind in terms of IPC, and they really shine during heavy multitasking etc.

The advantages of the mainstream platform are slightly higher clocks, lower TDP and of course price. You really shouldn't worry about missing out on any performance. When it comes to workloads which need more cores, HEDT scales better. You should think about which platform fits your needs, whatever they might be.

Or more reason to buy AMD, which is to thank for all of this. As it stands right now with Zen+, both AMD and Intel are practically at parity in IPC
Not even close.

It takes 4 years to design a new x86 architecture and produce silicon. It takes much less to tape out new chips based on existing IP. In Intel's case the 6-core CPU wouldn't have taken more than 6-12 months of tape-out and validation, and that project probably began the moment Intel started seeing AMD's initial Zen numbers and expectations. Another thing is pricing. Had Zen not been around, an 8700K with 6 cores would've been north of $500-600, and clocks would probably have been kept more conservative had AMD still been on Bulldozer.
There were multiple sources pointing to 6-core versions of Skylake/Coffee Lake back in 2016 and before, and the designs were already done and taped out at that point.

Intel can't just slap two more cores on the die, they have to redesign the interconnect, memory controller and cache to make it work. All the different core configurations are designed from the beginning with each architecture.
 

bug

Joined
May 22, 2015
Messages
13,489 (4.02/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
@efikkan HEDT is indeed one iteration behind from an architecture PoV, but that is only because pushing the design that far requires much tighter binning and a mature manufacturing process. As you said, this architectural lag is more than made up for by other features of the platform.

That said, I still don't think I'll ever pay for HEDT. But it's nice to read about stuff pushed to the extremes.
 
Joined
Feb 13, 2012
Messages
522 (0.11/day)
Not even close.
[Attached chart: relative CPU performance]

Not even close, you say? Take a look at the 6-core 2600X; that way it's more an apples-to-apples comparison. It's about 12% behind the 8700K in relative performance while having the same number of cores and threads. It also has 500MHz less clockspeed headroom, which translates to a bit over 10%. If we neutralize these numbers you end up with about a 2-3% difference. Of course this is an oversimplified look at the numbers; however, even if we went by the 12% number, that is still pretty close. So I guess the real question I need to ask you is: what does "not even close" mean to you?

Intel can't just slap two more cores on the die, they have to redesign the interconnect, memory controller and cache to make it work. All the different core configurations are designed from the beginning with each architecture.
More like tweak and refine. The original post I was responding to quoted a 4-year figure, which is only true for designing an x86 architecture from scratch, and that's for the x86 core IP design. Once that IP is ready, it is then used as one of the Lego-like blocks of IP for building SoCs. Building an SoC from existing IP generally takes 6-12 months, and the 8700K doesn't introduce any fundamental IP that the 7700K didn't have. How do you think mobile SoCs come out with new chipsets every year, only shortly after ARM even completes their new CPU core designs/IP?

All the different core configurations are designed from the beginning with each architecture.
True but irrelevant to our discussion, because Intel hasn't introduced a new architecture since Skylake, so nothing fundamental has changed since then. As for the memory controller and cache you speak of, it has been tweaks and refinements as the process progressed.


Regardless, whether a 6-core chip took 1 year or 3, the fact is that it was designed in response to (or anticipation of) AMD's new architecture. Think of the last time Intel made a big move. Remember Sandy Bridge? It was released around the time Bulldozer was supposed to be out. And when Bulldozer sucked, Intel stagnated and kept us with overpriced $400+ 4-core i7s, which became the new i3s after Zen lol. And now Intel is touting an 8-core mainstream CPU only one year after they finally moved to 6 cores. So it took 10 years to move from 4 to 6 cores, and now less than 2 years to move to 8. Again it's all thanks to AMD (or good competition in general); it's just your typical free market at play, whether you like AMD or not.
 
Joined
Jun 10, 2014
Messages
2,919 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Not even close, you say? … So I guess the real question I need to ask you is: what does "not even close" mean to you?
You were talking about IPC, and now you show a graph of total performance. Ryzen 7 2700X needs two more cores and boost beyond thermal specifications to come close to the "average performance" of i7-8700K. So, if you knew what IPC were, you'd know AMD have a long way to go to be on par with Intel on IPC.
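As a rough illustration of why an aggregate-performance chart says little about IPC, here is a back-of-the-envelope normalization; the scores, core counts and clocks below are hypothetical placeholders, and real multi-threaded scaling is not linear, so treat the result as a crude per-core, per-clock figure rather than a measured IPC.

```python
# Hypothetical multi-threaded scores (arbitrary units), core counts and all-core clocks.
# None of these are measured values; they only demonstrate the arithmetic.
chips = {
    "cpu_a": {"score": 1000, "cores": 6, "ghz": 4.3},
    "cpu_b": {"score": 980,  "cores": 8, "ghz": 4.0},
}

for name, c in chips.items():
    per_core_per_ghz = c["score"] / (c["cores"] * c["ghz"])
    print(f"{name}: ~{per_core_per_ghz:.1f} points per core per GHz")
```

Even with nearly identical totals, the per-core, per-clock figures can differ by a wide margin, which is the gap being pointed out here; memory, turbo and scaling effects mean this is only a ballpark proxy, not an IPC measurement.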

More like tweak and refine. The original post i was responding to quoted a 4 years figure, which is only true for designing an x86 architecture from scratch, and thats for the x86 core IP design. Once that IP is ready, it is then used as part of the lego like blocks of IP into building SOCs. Building an SOC from current existing IP generally takes 6-12 month, and 8700k doesnt introduce any new fundamental IP that 7700k didn't have. How do you think mobile socs come out with new chipsets every year and only shortly after arm even complete their new cpu core designs/ip.
The time from tapeout to the first engineering sample chip is usually ~4 months for such large dies, then 1-3 cycles of tweaking and waiting are normal before they finally ramp up volume production. This is why the time from a completed design to retail availability is usually 12-15 months, and this excludes the time spent developing the layout of the chip, which, as anyone knows, takes several months or more on top of that. The development times of GPUs are similar; Vega and Polaris were all 12-14 months. Even the Pascal line had similar timelines, with GP102, GP104 and GP106 taking about 12 months from tapeout to launch, even though they were all just "cut down" versions of GP100.

Vendors are able to release new designs every year because they have multiple products "in flight" at the same time. This is true for desktop CPUs, GPUs and even smaller ARM CPUs/SOCs. By the time the public hear about ARM finishing their next design, their partners have already been participating for 2-3 years in the process of developing it.
Right now Intel are sampling Ice Lake, doing final design of Tiger Lake, designing and simulating Sapphire Rapids and probably planning the next unknown one.
We know AMD have several Zen iterations in progress, and have started on designing "Zen5".

Regardless, whether a 6-core chip took 1 year or 3, the fact is that it was designed in response to (or anticipation of) AMD's new architecture. … So it took 10 years to move from 4 to 6 cores, and now less than 2 years to move to 8. Again it's all thanks to AMD (or good competition in general)…
The fact police have to intervene again: Haswell-E did lower the price of its six-core i7-5820K, which relatively speaking is one of the greatest price-per-performance drops we've seen in the past decade, and this without any real competition from AMD. Haswell-E also introduced 8 cores to the consumer market, and Broadwell-E introduced 10 cores (at a hefty price). So claiming that Intel has stagnated is blatantly untrue.

You lack a basic understanding of how competition works in technology. Making any changes to the design takes 1-2 product cycles, so the vendors are limited to the following responses in the short term:
- Change price
- Disable/enable built-in features
- Realign binning and move products between lineups; e.g. make a consumer version of an enterprise product.
- Tweak clocks (by a few percent)

These are simple facts you need to accept.

That said, I still don't think I'll ever pay for HEDT. But it's nice to read about stuff pushed to the extremes.
Sure, HEDT is certainly not for everyone. And the mainstream lineups offering good 6-cores should provide enough for gamers for several years ahead.

HEDT does however make sense for productive work with one or more of the following requirements:
- More cores
- More memory capacity and/or bandwidth
- Multiple GPUs (for compute)
- Virtualization
etc.

Another aspect which many forget is their upgrade cycle; if they keep upgrading whenever they run into bottlenecks, stepping up to HEDT may result in their machine "lasting" 50-100% longer, which may make it cheaper in the long run (if this applies to them). Buying a new system after ~3 years will often result in replacing all the memory as well, so having longer upgrade cycles can make a difference in total cost.
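A tiny worked example of that upgrade-cycle arithmetic, with made-up prices and lifetimes purely to show the calculation:

```python
# Hypothetical platform costs (USD) and useful lifetimes (years) -- placeholders, not real quotes.
platforms = {
    "mainstream": {"cost": 1500, "years": 3},
    "HEDT":       {"cost": 2400, "years": 5},
}

for name, p in platforms.items():
    print(f"{name}: ${p['cost'] / p['years']:.0f} per year of service")
```

With these invented numbers the pricier HEDT box actually comes out slightly cheaper per year of service; with other assumptions it goes the other way, which is exactly why the calculation only matters "if this applies to them".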

But ultimately we always have to respect the individual's right to make a decision, even if it's sub-optimal.

Speaking for myself, going HEDT turned out to be a great choice. My 5.5 year old workstation (i7-3930K, 64 GB ram) has been pushed to its limits. In fact, at times I have to resort to multitasking across both this one and my "test machine" (i5-4690K), to offload some web-browsing etc. because I'm running out of RAM. I'm now in the process of evaluating what my requirements for the next 5-6 years…
 
Joined
Mar 18, 2015
Messages
2,960 (0.87/day)
Location
Long Island
Even if it's only mid-April, I think I've found the idiocy of the year.
But please, do tell what is pro-competition in your opinion?

Congratulations on the new title. I'm imagining a corporate boardroom where they are going around the table in a brainstorming session for new ideas and one exec says "Hey, ya know what ... why don't we make it easier for our competitors to gain market share?" That guy / gal will ponder that statement each time he's sitting by the phone waiting for a call from the employment agency. 'Cause every corporation's mission statement includes "make more money ... increase market share". Gaining market share is a zero-sum game: what you gain takes away market share from someone else. Step into the real world .... there are no kumbaya moments in the boardroom.... it's dog eat dog.

From the other side... being competitive is building a product that can actually compete.... Innovation goes beyond using brand names that are designed to mimic the competition's products, hoping to bleed off market share through customer confusion.... leader behavior is distinguishing yourself from the competition .... for example X370 is clearly intended to mimic market leader Z370. Ryzen 7 and 5 are clearly intended to mimic Intel's i5 and i7. Market leaders claim their own path, runners-up mimic. Now AMD is whining about not being able to do the same in the GFX card arena. MSI, if they chose to, could easily slap their coolers on reference cards from both sides and still use them on any number of lines of cards. But if they want to partner with nVidia and be provided with certain nVidia-owned enhancements, there are conditions.

Is McDonalds being anti-competitive by refusing to let Burger King sell "Big Macs" and "Quarter Pounders" ? Is Intel being anti-competitive by not letting AMD system builders include "Coffee Lake" in their product description ? Is Apple being anti-competitive by not letting competitors call their product the i-Phone ?

As I have said before ... when the local pizza shop accepted a free "Coca-Cola" branded fridge from the Coca-Cola distributor, the pizza guy accepted that he is NOT permitted to store Pepsi products in that fridge ... is that anti-competitive? The shop coulda bought their own fridge ... but they made a deal that's mutually beneficial to both sides. The shop eventually got a 2nd fridge from the Pepsi distributor. So please enlighten us ... how is it in any way anti-competitive for the soda companies to require that Sprite can't be kept in the Pepsi-branded fridge and 7-Up can't be stored in the Coca-Cola branded fridge?

No one is twisting the AIBs' arms requiring them to become a partner. It's an offer ... you want us to provide enhancements that will help your product distinguish itself in the marketplace? ... then these are the terms. This is our product, this is our branding. If you want these enhancements, then we'll "give you the free fridge" so to speak as our partner ... but we are NOT going to let you put our competitor's product "in our fridge". Not some evil-empire concoction that someone just came up with, just common business practice that's been employed since Model T's were pulling into gasoline stations and grabbing a soda out of branded ice boxes.

It's not as if Intel doesn't have these designs sitting on the shelves. Same with nVidia. It's not as if, when AMD came out with their 2xx series, they said oh crap and started developing the 780 Ti. It was widely speculated, based upon leaked specs, that when nVidia saw what the AMD line was going to be about, they shifted their lineup.... the 770 became the 780 because the 770 was going to be faster than AMD's top card. The selling price of the top card had hovered around $700 for over a dozen years, so the 780 was shelved because they could maintain market position as top dog with the much cheaper to produce 770, now relabeled the 780. Then when the 290X was coming out, nVidia didn't suddenly say "let's hit the boards and design a faster card"; they dusted off the old 780 design, put it into production, and were selling it 5 days later as the 780 Ti. They continue that practice today of holding off the top card in a series until AMD comes out with something that gets close.

With CPUs, AMD hasn't been able to match core speeds, so they took the reasonable approach of selling us on more cores. And while that is useful in specialized apps like rendering and video editing, it doesn't help in gaming ... it doesn't help in AutoCAD ... but that hasn't stopped the ad execs from selling more cores to those markets, where more cores are about as useful as 4WD for that Florida soccer mom. Still, usefulness often doesn't translate to perception, and perception is what drives markets. But what we keep seeing is ...

Boardrooms are like college coaching staffs ... their job is to prepare to be competitive by acknowledging and planning for trends. And with multiple conferences, we see members of different conferences targeting their recruiting to what their conference will be like 2-3 years down the line. Intel isn't responding to last week's release ... their response is producing what they planned out 3 years ago in order to be ready when / if the market moves in one direction or another.
 

bug

^^^ W-T-F ???
 
Joined
Feb 13, 2012
Messages
522 (0.11/day)
You were talking about IPC, and now you show a graph of total performance. Ryzen 7 2700X needs two more cores and boost beyond thermal specifications to come close to the "average performance" of i7-8700K. So, if you knew what IPC were, you'd know AMD have a long way to go to be on par with Intel on IPC.
You perhaps misread my statement. I specifically mentioned the 2600X because it has the same number of cores and threads as an 8700K. Comparing the 8-core is more apples to oranges if we are to look at per-thread performance. AMD's 6-core Ryzen is about 12-13% behind an 8700K in relative performance while having a 10% lower clockspeed; that was my comparison, not the 2700X, as core counts do not scale linearly, especially when we are looking at a relative performance chart.

The time from tapeout to the first engineering sample chip is usually ~4 months for such large dies, then 1-3 cycles of tweaking and waiting is normal before they finally ramp up volume production. This is why the time from a completed design to retail availability is usually 12-15 months, this excludes the time developing the layout of the chip, which anyone knows, takes several months or more on top of that.
Thanks for the valuable information, though it seems we are talking about two slightly different things; I was talking about the chip design aspect using already existing IP rather than tape-out and manufacturing. But to add to your valuable points, it is important to note that Intel's 14nm process is very mature and is an in-house process, which would minimize/reduce the time to market. Not to mention it's safe to say the 8700K was released before mass availability/inventory was actually in place, as was apparent in its first few months after release.


The fact police have to intervene again: Haswell-E did lower the price of its six-core i7-5820K, which relatively speaking is one of the greatest price-per-performance drops we've seen in the past decade, and this without any real competition from AMD. Haswell-E also introduced 8 cores to the consumer market, and Broadwell-E introduced 10 cores (at a hefty price). So claiming that Intel has stagnated is blatantly untrue.



These are simple facts you need to accept.
$400 on a platform that will cost you an extra $250+ for the motherboard, along with requiring quad-channel memory costing another $250+, is hardly a good deal. It's not a fact check when I was speaking of the mainstream platform being stagnant and you happen to mention an HEDT platform. Because if I were to go there, then all I have to say is "Threadripper"... You're welcome.

You lack a basic understanding of how competition works in technology. Making any changes to the design takes 1-2 product cycles, so the vendors are limited to the following responses in the short term:
- Change price
- Disable/enable built-in features
- Realign binning and move products between lineups; e.g. make a consumer version of an enterprise product.
- Tweak clocks (by a few percent)
Again, not once did I mention changing the design; quite the opposite. I was refuting the claim that the 8700K is an all-new design. I was saying it was a design based on existing IP, with nothing new about it other than the chip layout being 6 cores.
As for competition in technology, I disagree there; competition exists just like in any market, except it happens in the future. The products you work on today are your response to your competition 3 or 4 years out, so when Intel designs products, they have to predict what their competition will be like at that time. Jim Keller was hired by AMD in 2012, and that's when Zen design began; he then left in late 2015 when the Zen design was practically complete. A bit over a year later Zen was released. Throughout this whole time there were multiple announcements on the progress, as well as many leaks. Intel was anticipating Zen. So again, back to the argument: thanks to AMD (competition), the CPU market is better than ever in terms of performance and cost.

But to clarify:
- Change price
- Disable/enable built-in features
- Realign binning and move products between lineups; e.g. make a consumer version of an enterprise product.
- Tweak clocks (by a few percent)

I fully agree on those points, and remember, 6 cores are nothing new to Intel. This whole discussion began simply by stating that Intel brought the 6-core segment to the mainstream (and clocked them really high), drastically lowering the cost of an Intel 6-core platform, and intends to go even further with an 8-core mainstream system, all in response to or anticipation of the competition.
 
Joined
Jul 29, 2014
Messages
484 (0.13/day)
Location
Fort Sill, OK
Processor Intel 7700K 5.1Ghz (Intel advised me not to OC this CPU)
Motherboard Asus Maximus IX Code
Cooling Corsair Hydro H115i Platinum
Memory 48GB G.Skill TridentZ DDR4 3200 Dual Channel (2x16 & 2x8)
Video Card(s) nVIDIA Titan XP (Overclocks like a champ but stock performance is enough)
Storage Intel 760p 2280 2TB
Display(s) MSI Optix MPG27CQ Black 27" 1ms 144hz
Case Thermaltake View 71
Power Supply EVGA SuperNova 1000 Platinum2
Mouse Corsair M65 Pro (not recommded, I am on my second mouse with same defect)
Software Windows 10 Enterprise 1803
Benchmark Scores Yes I am Intel fanboy that is my benchmark score.
I'm still using a 9-year-old Intel Core 2 Duo 6750 and it works perfectly :)

Right on, I am still using a Core 2 Extreme X6800 with an NVidia NFORCE 570 SLIT-A motherboard as well. Combined with a GTX 1070, it is a good gaming machine.
 
Joined
Apr 29, 2018
Messages
127 (0.06/day)
Not even close, you say? Take a look at the 6-core 2600X; that way it's more an apples-to-apples comparison. It's about 12% behind the 8700K in relative performance while having the same number of cores and threads. …

Actually, you are not reading the graph correctly, as it's a hair over 15% difference. The way you are reading the graph would mean the 8700K is only 55% faster than the G4560, when it's really over two times faster.
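The distinction being drawn here is between subtracting chart percentages and dividing them. A small sketch with hypothetical "relative performance" values (fastest CPU normalized to 100%):

```python
# Hypothetical relative-performance chart values, fastest CPU = 100%.
# "How much faster" requires a ratio, not a subtraction.
relative = {"fast_cpu": 100.0, "mid_cpu": 87.0, "budget_cpu": 45.0}

baseline = relative["fast_cpu"]
for name, value in relative.items():
    if name == "fast_cpu":
        continue
    wrong = baseline - value                  # naive subtraction of chart values
    right = (baseline / value - 1.0) * 100.0  # correct ratio-based speedup
    print(f"vs {name}: subtraction says +{wrong:.0f}%, ratio says +{right:.0f}%")
```

With these made-up values the gap to the mid-range chip is about 15% rather than 13%, and the budget chip comes out more than twice as slow rather than "55% slower", which is the correction being made above.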

Right on, I am still using a Core 2 Extreme X6800 with an NVidia NFORCE 570 SLIT-A motherboard as well. Combined with a GTX 1070, it is a good gaming machine.
Ignorance is bliss, as they say, and you do not even have a clue how much performance you are wasting. That CPU can't even average 60 fps in most halfway modern games and will have pathetically low minimums in most cases. Hell, your CPU would even be a bottleneck for a 1050 Ti in most games.
 
Joined
Mar 6, 2017
Messages
3,244 (1.20/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
Ignorance is bliss, as they say, and you do not even have a clue how much performance you are wasting. That CPU can't even average 60 fps in most halfway modern games and will have pathetically low minimums in most cases. Hell, your CPU would even be a bottleneck for a 1050 Ti in most games.
Even stepping up from my old Core i5 3570K to the Core i7 8700K that I have now is like stepping into a Ferrari, putting the pedal through the floor and burying yourself in the seat as you watch the needle bury itself in the red.
 

Wolflow

New Member
Joined
May 2, 2018
Messages
4 (0.00/day)
You who deny the effect of Zen, the market disruption that put pressure on Intel and forced them to bring 6- and 8-core CPUs to the market sooner
I think nobody would...

... the issue lies in the fact that it's just marketing, and it would have happened sooner or later anyway because of multi-core smartphones: what does this really bring us?

4 cores (especially with SMT) were already more than enough (not the 640kB "enough" type) and are still hard to correctly exploit.
8 cores + SMT is overkill for almost all desktop uses and not enough where you need parallelism, which is better suited to GPUs anyway, sync being a severe issue with large parallelism and SIMD being there for local explicit cases...

Simple fact: be it Ryzen 1000 or 2000, AMD is in a similar position to the one it was in back when Thuban had to compete against Sandy Bridge, except now there's SMT on both sides (so AMD has something barely comparable to the i7). Ryzen shines a bit more with gaming loads, but it's almost on the same level as Sandy Bridge, so it's not disruptive in any other way than simple marketing (well, to be fair, it is via the single-die strategy, but that's really an HPC/EPYC feature, useless but present in desktop parts: an economic strategy).
 
Joined
Nov 22, 2013
Messages
7 (0.00/day)
Right on, I am still using a Core 2 Extreme X6800 with an NVidia NFORCE 570 SLIT-A motherboard as well. Combined with a GTX 1070, it is a good gaming machine.
Well done. Excellent processor. Unfortunately for me, it's time to buy a new machine. I'll simply give this computer to my younger sister; I'm getting the i5-8400, I already have a Gigabyte GTX 1060 6 GB, and I still have to buy 16 GB of DDR4 3200 MHz memory. I'm not worried for the next 9 years... :)

This is my #1 site for computers, even ahead of Guru3D, but I think that in this case the i5-8400 got too low a rating. On all other sites it gets a rating of over 90, yet only a small rating here. I think this processor deserves at least a 9.
 
Joined
Sep 15, 2015
Messages
1,039 (0.32/day)
Location
Latvija
System Name Fujitsu Siemens, HP Workstation
Processor Athlon x2 5000+ 3.1GHz, i5 2400
Motherboard Asus
Memory 4GB Samsung
Video Card(s) rx 460 4gb
Storage 750 Evo 250 +2tb
Display(s) Asus 1680x1050 4K HDR
Audio Device(s) Pioneer
Power Supply 430W
Mouse Acme
Keyboard Trust
Joined
Aug 14, 2017
Messages
74 (0.03/day)
Thank you, Intel!

And it IS 8-core, the 9700K, so at last we can have a real battle without an AMD handicap:

Ryzen 2700X 8-core VS Intel 9700K 8-core

The winner is...

It should be settled without any complaint or hesitation.

Ryzen 2600 6-core VS Intel 8700K 6-core

and
Ryzen 2700X 8-core VS Intel 9700K 8-core

That looks fair and right...
 
Joined
Aug 16, 2016
Messages
1,025 (0.35/day)
Location
Croatistan
System Name 1.21 gigawatts!
Processor Intel Core i7 6700K
Motherboard MSI Z170A Krait Gaming 3X
Cooling Be Quiet! Shadow Rock Slim with Arctic MX-4
Memory 16GB G.Skill Ripjaws V DDR4 3000 MHz
Video Card(s) Palit GTX 1080 Game Rock
Storage Mushkin Triactor 240GB + Toshiba X300 4TB + Team L3 EVO 480GB
Display(s) Philips 237E7QDSB/00 23" FHD AH-IPS
Case Aerocool Aero-1000 white + 4 Arctic F12 PWM Rev.2 fans
Audio Device(s) Onboard Audio Boost 3 with Nahimic Audio Enhancer
Power Supply FSP Hydro G 650W
Mouse Cougar 700M eSports white
Keyboard E-Blue Cobra II
Software Windows 8.1 Pro x64
Benchmark Scores Cinebench R15: 948 (stock) / 1044 (4,7 GHz) FarCry 5 1080p Ultra: min 100, avg 116, max 133 FPS
Keep in mind that if AMD hadn't kicked Intel between the legs with their excellent Ryzen CPUs, we would still have a 4C/8T i7-8700K and an upcoming 4C/8T i7-9700K, while i5s would be 4C/4T and i3s 2C/4T. Performance-wise, it would be classic Intel: a 4-8% increase from the previous to the new CPU generation. :rolleyes:
 

bug

Keep in mind that if AMD hadn't kicked Intel between the legs with their excellent Ryzen CPUs, we would still have a 4C/8T i7-8700K and an upcoming 4C/8T i7-9700K, while i5s would be 4C/4T and i3s 2C/4T. Performance-wise, it would be classic Intel: a 4-8% increase from the previous to the new CPU generation. :rolleyes:
I'm keeping that in mind, but I'm not sure how it helps me ;)

Other things AMD has done right: IPC over pure GHz, which put NetBurst to rest (otherwise 150W+ TDPs would be a common sight), and x86_64, so we don't need Itanium for 64-bit computing.
 