Thursday, July 11th 2024
AMD Plans to Use Glass Substrates in its 2025/2026 Lineup of High-Performance Processors
AMD reportedly plans to incorporate glass substrates into its high-performance system-in-packages (SiPs) sometime between 2025 and 2026. Glass substrates offer several advantages over traditional organic substrates, including superior flatness, thermal properties, and mechanical strength. These characteristics make them well-suited to advanced SiPs containing multiple chiplets, especially in data center applications where performance and durability are critical. The adoption of glass substrates aligns with the industry's broader trend towards more complex chip designs. As leading-edge process technologies become increasingly expensive and yield gains diminish, manufacturers are turning to multi-chiplet designs to improve performance. AMD's current EPYC server processors already incorporate up to 13 chiplets, while its Instinct AI accelerators feature 22 pieces of silicon. A more extreme example is Intel's Ponte Vecchio, which packs 63 tiles into a single package.
Glass substrates could enable AMD to create even more complex designs without relying on costly interposers, potentially reducing overall production expenses. The technology could further boost the performance of AI and HPC accelerators, a growing market that demands constant innovation. The glass substrate market is heating up, with major players like Intel, Samsung, and LG Innotek also investing heavily in this technology. Market projections suggest explosive growth, from $23 million in 2024 to $4.2 billion by 2034. Last year, Intel committed to investing up to 1.3 trillion Won (almost one billion USD) to start applying glass substrates to its processors by 2028. Everything suggests that glass substrates are the future of chip design, and we await the first high-volume production designs.
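For a sense of scale, here is a minimal sketch of the compound annual growth rate implied by that projection; the dollar figures and the 2024-2034 window are simply the endpoints quoted above.

```python
# Rough sketch: implied compound annual growth rate (CAGR) of the quoted
# glass-substrate market projection (USD 23 million in 2024 -> USD 4.2 billion by 2034).
start_value = 23e6    # 2024 market size, USD
end_value = 4.2e9     # 2034 projected market size, USD
years = 2034 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # prints roughly 68% per year
```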
Sources:
Business Korea, via Tom's Hardware
76 Comments on AMD Plans to Use Glass Substrates in its 2025/2026 Lineup of High-Performance Processors
Regarding your efficiency claims, computerbase paints a very clear picture. The 13700K at 88 W is faster than the 7700X at 142 W (!!!!) in their MT test suite. Do you accept these results, or are they fake? The 7700X scores 0.76 performance per watt while the 13700K scores 1.29. But sure, let's pretend AMD is more efficient, why not :D
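If you don't trust the chart, here's a minimal sketch you can run yourself, using only the perf/watt and power figures above (the scores are on whatever normalized scale computerbase uses, so only the ratios matter):

```python
# Rough sanity check of the perf/watt and package-power figures quoted above.
# Relative performance = (performance per watt) * (package power); the units
# are whatever normalized scale computerbase uses, so only the ratios matter.
cpus = {
    "13700K @ 88 W": {"perf_per_watt": 1.29, "power_w": 88},
    "7700X @ 142 W": {"perf_per_watt": 0.76, "power_w": 142},
}

for name, c in cpus.items():
    relative_perf = c["perf_per_watt"] * c["power_w"]
    print(f"{name}: relative MT performance ~ {relative_perf:.1f}")

# -> 13700K ~ 113.5 vs 7700X ~ 107.9: about 5% faster at ~38% less power,
#    which is where the 1.29 vs 0.76 efficiency gap comes from.
```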
Since you were bent on MT, the 7700X takes the cake, while in ST the 13700K edges it out. We're going with Cinebench, since that's where you were pulling the 50% MT performance figures from, not some random tech site's suite.
It was obvious from the start that it would be better with twice the threads; otherwise it would be a big fail, and nobody says that about RL. And why 88 W? Neither CPU is an 88 W part out of the box, or a 142 W one for that matter. RL competes quite fairly against AMD, but it's less power efficient and its out-of-the-box power targets are too high. That's it, no need to compare apples to oranges.
Since you like Cinebench, here are the Cinebench results. At 88 W it is faster and more efficient than the 7700X at 142 W. If that seems to you like AMD is more efficient, sure, whatever, believe what you will. If facts can't change your mind, I don't know what else to present to make it happen.
I'm dipping out. It's already a page of off-topic and whatever else. You do you, dude, but you've made your stance clear.
Before you throw another hidden call-out, insult, snark, whatever: yes, I know the 13700K is a fantastic chip that does the work. I already know Intel makes good chips; my wife literally uses a 14900K for content creation.
It's quite obvious when looking at Intel's cash flow that they didn't design their chips to be cheap to produce. Die size is also a big indicator: the Intel i7 and i9 are very big, about twice the size of the 7700X and 50% larger than both the 7900X and 7950X. They should not compete in the same segment. For a consumer, in the short term, it's good value, but it's not a great technological achievement.
When AMD had its wins with Zen 1, by pricing 8-core CPUs with MT against 4-core CPUs without MT, it was great for AMD (and consumers), because they were coming back from very far behind and could make money at that price point. You know, being competitive.
The issue is Intel being competitive in performance per dollar by slashing prices to the point of losing money. That's not being competitive in a meaningful way, that's trying to stay afloat. It's more similar to AMD's post-Phenom situation, when they were losing ground with each generation. We can see the same pattern: more power, more cores, and cut prices. Their prices were very competitive for specific workloads, but they were not making money and came very close to death.
Given that you also need a better case, cooler, and motherboard VRM to handle the i7 at its default settings than you do for the already cheaper 7700X, I really don't see them in the same price bracket. Even if you tweak the i7 to make it run (very efficiently, that's true) at 60 W, the motherboard vendor still has to build the VRMs for 250 W. On AM5, even the worst A620 board will feed a stock 7700X with no sweat. And if you're not hypermiling your CPU, at stock the i7 requires water cooling to avoid throttling. AM5 also gets hot, but it's far more manageable with an air cooler.
For a fully equipped PC that doesn't make turbine noise, I made price estimates for LGA1700 and AM5. At the same total price, Intel was only on par in value after its price cuts, and its performance lead wasn't significant enough to accept the drawbacks, as Intel CPUs would require quite a lot of tweaking to avoid dumping so much heat that it cooks me in my office during summer.
13900K a bit faster than AMD 7950X (both 32T)
13700K a bit faster than AMD 7900X (both 24T)
13600K a bit faster than AMD 7700X (20T/16T)
To me it's Intel trying to undercut the Zen 4s to show value. Nobody (but you, apparently) believed the 13700K and 7700X were competitors; with the Intel CPU being twice the size and TDP, they were never in the same range to begin with. AMD certainly didn't believe so either, because two months later prices had already settled to put the 7900 in front of the 13700K.
In the same way, I could try to argue that a tweaked Radeon 7900 XT in specific workloads is more power efficient than a 4070 Ti. That's not technically wrong, but it's still a long way from concluding that RDNA3 has overall better value and efficiency than Ada. People would call me out, with reason. No, we didn't; the 13700K under 105 W (the 7700X's TDP, since OC is not fair game) is only 2 to 10% faster. Definitely not "much faster" in my book. Or, to use your words, far, far behind the 7900X.
And tweaking your CPU doesn't change the requirements for the motherboard. It needs to be able to deliver stock power, and Intel's stock power is far higher than AMD's.
Again, the issue everybody tries to explain to you is not that Raptor Lake is inherently flawed, but that Intel runs its CPUs so high on the power curve, making them less efficient than AM4 CPUs on almost all metrics. You cannot in fairness compare a heavily underclocked CPU to a stock one.
Naming and marketing are fluid for both companies. AMD obviously never intended the 7700X to compete with the 13700K; otherwise they'd have kept it at the same price. No, I'm not agreeing to that. Intel's are on par, as the market dictates. Both AMD and Intel have had to adjust their prices a lot since those CPUs were announced.
Also, you seem to forget that there are more than two CPUs in the line-up, and they are not all as well priced as the 13700K.

That doesn't make the rest of the RDNA3 GPUs more efficient, and there are other use cases where the 4070 Ti is more efficient. I wrote that I was cherry picking, that it was a form of lie. Why would you agree to that?

Apples, oranges. At isopower, the 13700K is less than 10% faster on every point that doesn't require an artificial increase in the 7700X's consumption. That is not huge. And it's still behind the 7900X (you shouldn't wear blinders).

Apples, oranges. An Intel motherboard only able to provide 100 W is lacking a lot and would very much cripple anything but the slowest CPUs. Not so for AMD. For anyone but a hypermiler, VRMs need to be stronger for Intel CPUs than for AMD's.

No, he said that they are matching or better. You managed to find a single Intel CPU that matches AMD. Well, you had to go for a discounted, previous-generation CPU, but you found one. Great, but not really on point. Also, I'm the kind of person who considers platform durability, heat, and noise to be a bigger part of "performance" than the Cinebench score.
Computerbase's mixed load is quite similar to TPU's selection; its average power consumption should not be too dissimilar, and with the specifics of their tests, 88 W would be quite realistic.
142 W is far into OC territory; obviously efficiency drops. That's the same reason so many Intel SKUs have terrible efficiency out of the box. Had they released them with a real 95/125 W TDP and priced them according to their performance at that power, the story would be very different. Nobody says the contrary. But the same applies to AMD with the X and non-X variants, and Intel chose to make fools of themselves trying to one-up AMD.
Choosing 142 W as a reference for comparison even though you've been told repeatedly how wrong it is makes you a liar, and a dense one at that. Following your logic, a 13600K needs substantially more power to try to match a 7900X and still loses on every metric. Great, we learned nothing, because they don't compete with each other.
As long as you keep cherry picking specific SKUs to try to generalize to the whole product stack, you'll have people replying that it's wrong. I'm not the first to react to your foolishness on the matter, and I'll let someone else do it next time.
Stock - 135w. Sure bro.
I read somewhere that Intel Atom is the same as Pentium, but only half of it. So it's half of a regular processor.
You're right, but it's not like ST is half of the benchmarks in 2024. Of course I accept it.
What I don't accept is you picking ONE benchmark. It's called cherry picking.
No reviewer does that, even for GPUs, contrary to what you've said. They'll show an average from several benchmarks. No one takes the number from the game with the biggest gain and uses it to describe the performance of a GPU. I think you know that.
Maybe you're just chasing the highest possible numbers, similar to bandwidth, but I don't, because that doesn't help me as a customer, and I don't think I'm the only one. That's also why software bottlenecks should be accepted, because they will still be there when the end user wants to run the same program.
The 7700X is still faster in 9 benchmarks; overall it's slower, but not by 50%.
In the end we're both wrong. You're using one single benchmark, and I'm using an average that includes ST.
The thread is about AMD using glass substrates.
I'm not guessing