Friday, August 16th 2024

TSMC Reportedly to Manufacture SoftBank's AI Chips, Replacing Intel

SoftBank has reportedly decided against using Intel's foundry for its ambitious AI venture, Project Izanagi, and is opting for TSMC instead. The conglomerate aims to challenge NVIDIA in the AI accelerator market by developing its own AI processors. This decision marks another setback for Intel, which has faced several challenges recently. In February 2024, reports emerged that SoftBank's CEO, Masayoshi Son, planned to invest up to $100 billion to create a company similar to NVIDIA, focused on selling AI accelerators. Although SoftBank initially worked with Intel, it recently switched to TSMC, citing concerns about Intel's ability to meet demands for "volume and speed."

The decision, reported by the Financial Times, raises questions about Intel's future involvement and how SoftBank's ownership of Arm Holdings will factor into the project. While TSMC is now SoftBank's choice, the foundry is already operating at full capacity, making it uncertain how it will accommodate this new venture. Neither SoftBank, Intel nor TSMC has commented on the situation, but given the complexities involved, it will likely take time for this plan to materialize. SoftBank will need to replicate NVIDIA's entire ecosystem, from chip design to data centers and a software stack rivaling CUDA, a bold and ambitious goal.
In July, SoftBank expanded its semiconductor portfolio by acquiring Graphcore, a British AI chip designer. While the acquisition amount remains undisclosed, this move is consistent with SoftBank's significant presence in the chip industry. The company already holds a majority stake in Arm, another British chip designer, which it purchased for $32 billion in 2016. Despite Arm's return to the stock market last year, SoftBank maintained its controlling interest.

In a separate development, Intel divested its position in Arm. The American tech giant sold its 1.18 million shares, generating approximately $146.7 million from the transaction.
Source: Data Centre Dynamics

37 Comments on TSMC Reportedly to Manufacture SoftBank's AI Chips, Replacing Intel

#1
TumbleGeorge
Too many strikes against Intel. Maybe they should try filing for bankruptcy?
Posted on Reply
#2
Kn0xxPT
TumbleGeorge: Too many strikes against Intel. Maybe they should try filing for bankruptcy?
Well... probably not yet... but I suspect that Nvidia is keeping an eye on Intel "buy" options... or a CPU collaboration.
Posted on Reply
#3
Frank_100
TumbleGeorge: Too many strikes against Intel. Maybe they should try filing for bankruptcy?
Intel is going to be just fine.

They were last to move to EUV and have suffered.

They are going to be the first to move to high NA EUV.
They will have a 1-2 year advantage on everybody.

Now if only they would start making chips for the HEDT market (16 P-cores, AVX-512).
Posted on Reply
#5
john_
Intel needs to build its own CPUs in its fabs too, meaning they need extra capacity of good wafers to also cover orders from their customers. So this isn't necessarily bad news for Intel. Their future process nodes might be doing just fine, just not currently at the capacity needed to also cover others.
Posted on Reply
#6
kondamin
So SoftBank is going to make a processor that is bound for failure at a more expensive plant that is going to be less willing to cater to their whims. Another great move by SoftBank, brilliant.
Posted on Reply
#7
tfp
kondamin: So SoftBank is going to make a processor that is bound for failure at a more expensive plant that is going to be less willing to cater to their whims. Another great move by SoftBank, brilliant.
They really should be at Intel if they want to fail, amirite?
Posted on Reply
#8
Frank_100
AI costs have to come down.
This will probably mean a new architecture.
In-memory computing with phase-change memory or ReRAM.
Posted on Reply
#9
kondamin
tfp: They really should be at Intel if they want to fail, amirite?
Intel, which has to prove itself in the foundry business for third parties, will be far more willing to help them out than overbooked TSMC.

So no, while it's very unlikely SoftBank will get anywhere with its latest folly, it's going to cost them less breaking ground with the help of Intel than it will with TSMC.
Posted on Reply
#10
tfp
kondamin: Intel, which has to prove itself in the foundry business for third parties, will be far more willing to help them out than overbooked TSMC.

So no, while it's very unlikely SoftBank will get anywhere with its latest folly, it's going to cost them less breaking ground with the help of Intel than it will with TSMC.
They could have gone with Samsung as well; TSMC must be providing something they think they need.
Posted on Reply
#11
kondamin
tfp: They could have gone with Samsung as well; TSMC must be providing something they think they need.
Yes, it being the #1 foundry in the world should go over great with the shareholders and future investors.
TSMC building a fab in Japan would be a second.

Technology-wise, meh, they need a design first.
Posted on Reply
#12
InVasMani
Frank_100: AI costs have to come down.
This will probably mean a new architecture.
In-memory computing with phase-change memory or ReRAM.
We have started hearing about compression-on-system-memory type technologies emerging, which is nice. I mentioned ages ago that it would be really great to see DIMMs integrating a chip to compress the memory contents directly, both to help speed up data transmission and to increase storage capacity. Even a fairly minor bit of that initially, improved over time, should go a long way. I feel there is definitely room for it to mature. Instead of doubling up DIMMs for capacity we might start seeing double-sided DIMMs, where one side is for capacity and on the opposite side are chips of similar size with TSVs that compress/decompress the individual DRAM banks in real time. At least that's my thought on a possible way to go about it; it seems like a pretty plausible approach. The same could be applied to NAND as well.

I don't think GPUs are likely to be doing that, given they already have a GPU core that's perfectly suited for math, and the higher capacity is, I think, both more vital and more precious in terms of real-estate space there than memory DIMMs are on a PC system for the average consumer. In the case of servers perhaps not, but they don't represent standard usage, so needs will vary: in some cases you might need tons of capacity, in others not.

Another angle on the memory thing is that maybe they'll license FPGA technology or something that could be used for compression/decompression at times but also, depending on usage, be reconfigured as programmable co-processors integrated into system memory. That would be a neat novelty to see happen. Or maybe they'll license CPU/GPU cores from various manufacturers to create a kind of SoC on the memory DIMMs themselves that can do compression/decompression as well as other specialized tasks. That's just me trying to think outside the box on new ways to fuse and merge some of these technology components together. I'm sure there would be some positives and negatives to any of these ideas, naturally.
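Just to make the idea concrete, here is a minimal Python sketch of a toy "compress each bank on the DIMM" model. Everything in it is hypothetical and for illustration only: the CompressedBank class, the 4 KiB page size, the 64 KiB bank size and the zlib level are made up, not a description of any real module.

```python
# Toy model of the "compress the DRAM banks on the DIMM" idea above.
# Hypothetical throughout: real hardware would use a fixed-function codec, not zlib.
import zlib

PAGE_SIZE = 4096            # logical page the host would see
BANK_CAPACITY = 64 * 1024   # physical bytes in one imaginary bank

class CompressedBank:
    def __init__(self):
        self.store = {}     # page number -> compressed bytes
        self.used = 0       # physical bytes consumed so far

    def write(self, page_no: int, data: bytes) -> bool:
        packed = zlib.compress(data, 1)            # fast level, hardware-friendly
        old = len(self.store.get(page_no, b""))
        if self.used - old + len(packed) > BANK_CAPACITY:
            return False                           # bank full; a real design must spill
        self.used += len(packed) - old
        self.store[page_no] = packed
        return True

    def read(self, page_no: int) -> bytes:
        return zlib.decompress(self.store[page_no])

bank = CompressedBank()
pages = 0
# Highly repetitive pages compress well, so far more logical pages fit than physical bytes.
while bank.write(pages, (b"%016d" % pages) * (PAGE_SIZE // 16)):
    pages += 1
print(f"{pages} logical pages of {PAGE_SIZE} B squeezed into {BANK_CAPACITY} physical bytes")
```

Of course this only models the capacity side, not the bus-traffic question raised in the replies below.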
Posted on Reply
#13
tfp
At one point Intel and, I think, AMD had plans to put FPGAs on package. The problem with FPGAs is that they are great for proof of concept but generally don't seem to be used long term. People move to ASICs, GPUs, or something else.
Posted on Reply
#14
R0H1T
Frank_100: They will have a 1-2 year advantage on everybody.
An advantage in machines? Is that why they squandered a 5(?) year lead when they moved to 22nm and everyone else was so far behind :ohwell:
Posted on Reply
#15
ScaLibBDP
tfp: At one point Intel and, I think, AMD had plans to put FPGAs on package. The problem with FPGAs is that they are great for proof of concept but generally don't seem to be used long term. People move to ASICs, GPUs, or something else.
>>...At one point Intel and I think AMD had plans to put FPGA on package....

Intel spent a lot of money and did it. A complete "infrastructure", I mean a package with CPU plus FPGA and software, is too expensive. It makes sense only for data centers, and unfortunately for Intel, many providers of data center solutions use GPUs from NVIDIA and AMD.
Posted on Reply
#16
Daven
"...citing concerns about Intel's ability to meet demands for "volume and speed."

Or rather, they could do it fast enough and at scale, as long as SoftBank was okay with the chips oxidizing and frying from runaway voltage after a year of operation.
Posted on Reply
#17
ScaLibBDP
>>...SoftBank will need to replicate NVIDIA's entire ecosystem, from chip design to data centers and a software
>>stack rivaling CUDA, a bold and ambitious goal...

$100 billion is not enough. I'm 99.99% sure it will never happen.
Posted on Reply
#18
remixedcat
kondamin: Intel, which has to prove itself in the foundry business for third parties, will be far more willing to help them out than overbooked TSMC.

So no, while it's very unlikely SoftBank will get anywhere with its latest folly, it's going to cost them less breaking ground with the help of Intel than it will with TSMC.
With the Raptor Lake-gate issue right now, people aren't going to trust Intel's fabs.
Posted on Reply
#19
tfp
ScaLibBDP: >>...At one point Intel and I think AMD had plans to put FPGA on package....

Intel spent a lot of money and did it. A complete "infrastructure", I mean a package with CPU plus FPGA and software, is too expensive. It makes sense only for data centers, and unfortunately for Intel, many providers of data center solutions use GPUs from NVIDIA and AMD.
Yeah, it looks like the furthest AMD has gotten is putting an FPGA on an x86 board, and it's still targeting embedded.

www.eejournal.com/article/amds-x86-cpu-and-fpga-tango-on-sapphire-technologys-embedded-pc-motherboard/
Posted on Reply
#20
Wirko
InVasMani: We have started hearing about compression-on-system-memory type technologies emerging, which is nice. I mentioned ages ago that it would be really great to see DIMMs integrating a chip to compress the memory contents directly, both to help speed up data transmission and to increase storage capacity. Even a fairly minor bit of that initially, improved over time, should go a long way. I feel there is definitely room for it to mature. Instead of doubling up DIMMs for capacity we might start seeing double-sided DIMMs, where one side is for capacity and on the opposite side are chips of similar size with TSVs that compress/decompress the individual DRAM banks in real time. At least that's my thought on a possible way to go about it; it seems like a pretty plausible approach. The same could be applied to NAND as well.

I don't think GPUs are likely to be doing that, given they already have a GPU core that's perfectly suited for math, and the higher capacity is, I think, both more vital and more precious in terms of real-estate space there than memory DIMMs are on a PC system for the average consumer. In the case of servers perhaps not, but they don't represent standard usage, so needs will vary: in some cases you might need tons of capacity, in others not.

Another angle on the memory thing is that maybe they'll license FPGA technology or something that could be used for compression/decompression at times but also, depending on usage, be reconfigured as programmable co-processors integrated into system memory. That would be a neat novelty to see happen. Or maybe they'll license CPU/GPU cores from various manufacturers to create a kind of SoC on the memory DIMMs themselves that can do compression/decompression as well as other specialized tasks. That's just me trying to think outside the box on new ways to fuse and merge some of these technology components together. I'm sure there would be some positives and negatives to any of these ideas, naturally.
Doesn't seem logical. You'd have to transfer uncompressed data, which means more data, over the memory bus. A better example of processing in memory would be an algorithm to search for certain patterns in memory. In this case, only the searched part would have to be sent over the bus to the processor, instead of everything.
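A rough way to picture that difference is to count the bytes that would have to cross the bus in each case. The sketch below is purely illustrative; the module_side_search helper and the assumed 8 bytes per returned offset are invented for the example, not any real interface.

```python
# Illustrative only: compare bus traffic for a conventional search (the whole region
# travels to the CPU) versus a hypothetical module that searches locally and
# returns just the match offsets.
import os

def find_all(region: bytes, needle: bytes):
    hits, i = [], region.find(needle)
    while i != -1:
        hits.append(i)
        i = region.find(needle, i + 1)
    return hits

def host_side_search(region: bytes, needle: bytes):
    # Conventional path: every byte of the region crosses the memory bus first.
    return find_all(region, needle), len(region)

def module_side_search(region: bytes, needle: bytes):
    # Processing-in-memory path: only the offsets come back (assume 8 bytes each).
    hits = find_all(region, needle)
    return hits, len(hits) * 8

region = os.urandom(8 * 1024 * 1024) + b"NEEDLE" + os.urandom(1024)
for name, search in (("host-side", host_side_search), ("module-side", module_side_search)):
    hits, traffic = search(region, b"NEEDLE")
    print(f"{name:12s} {len(hits)} hit(s), {traffic:,} bytes over the bus")
```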
Posted on Reply
#21
ScaLibBDP
Wirko: Doesn't seem logical. You'd have to transfer uncompressed data, which means more data, over the memory bus. A better example of processing in memory would be an algorithm to search for certain patterns in memory. In this case, only the searched part would have to be sent over the bus to the processor, instead of everything.
>>...Doesn't seem logical...

Why? On Windows, file compression is a very, very old and mature technology. Also, starting with Windows 10 there is a Memory Compression feature; take a look at Task Manager (the memory-related attributes). So, on the software side, Microsoft has implemented it twice. The only problem is that a hardware-based solution for memory compression would be more complex. For example, external memory extenders are not for the regular PC consumer because they are all very expensive; just look at the prices for Samsung's CXL Memory Expander.
Posted on Reply
#22
Wirko
ScaLibBDP: >>...Doesn't seem logical...

Why? On Windows, file compression is a very, very old and mature technology. Also, starting with Windows 10 there is a Memory Compression feature; take a look at Task Manager (the memory-related attributes). So, on the software side, Microsoft has implemented it twice. The only problem is that a hardware-based solution for memory compression would be more complex. For example, external memory extenders are not for the regular PC consumer because they are all very expensive; just look at the prices for Samsung's CXL Memory Expander.
I'm not saying compression in general is a bad idea. But if you're doing it on the memory modules, you need to transfer uncompressed data over the memory bus, and that may be a serious bottleneck. If the CPU (or whatever sort of processor) is doing it, it sends and receives already-compressed data to/from memory.
Remember DirectStorage and GPU decompression? One of its goals was (is?) to avoid sending uncompressed data to the GPU over the (relatively) slow PCIe bus.
There's at least one more issue: the compression ratio is unpredictable. How would a smart DIMM handle that?
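To make that last point concrete, here is a tiny sketch that runs zlib over a few made-up 4 KiB pages; the sample contents are invented, but the spread in ratios is exactly what a hypothetical smart DIMM would have to budget for.

```python
# Same compressor, wildly different ratios depending on the data -- which is
# why a fixed-size compressed bank can't promise how many pages it will hold.
import os
import zlib

samples = {
    "zero-filled page":  bytes(4096),
    "ASCII log text":    (b"GET /index.html 200 512 bytes\n" * 140)[:4096],
    "text + random mix": (b"header: ok\n" * 100) + os.urandom(2996),
    "pure random":       os.urandom(4096),
}

for name, page in samples.items():
    packed = zlib.compress(page, 6)
    print(f"{name:18s} {len(page):5d} -> {len(packed):5d} bytes "
          f"(ratio {len(page) / len(packed):5.2f}x)")
```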
Posted on Reply
#23
DaemonForce
Frank_100: Intel is going to be just fine.
Hahahahaah ✖
That's a black swan moment.
Intel is done.
SoftBank and TSMC weren't on my bingo card this year, and that's some dark magic.
ScaLibBDP: >>...SoftBank will need to replicate NVIDIA's entire ecosystem, from chip design to data centers and a software
>>stack rivaling CUDA, a bold and ambitious goal...
It's also SoftBank. They find a way.
Posted on Reply
#24
R-T-B
DaemonForce: Hahahahaah ✖
That's a black swan moment.
Intel is done.
SoftBank and TSMC weren't on my bingo card this year, and that's some dark magic.
He's not wrong. Intel is NOT done. They haven't even spun off their fab arm for a quick cash influx yet like AMD already did many years ago, for example. They are hurting right now, but awash with options to sell and I can promise you no one is discussing bankruptcy outside enthusiast fanboys.
Posted on Reply
#25
DaemonForce
Bankruptcy isn't quite on the table when Intel has a thumb in so many pies but it's a terrifying thing to think about.
Intel decided to ignore all the fires, hasn't gotten their new fab running yet and the clock is still ticking.
It's more characteristic of AMD to fumble, and I get that, but when every tech news outlet screeches about Intel selling "defective" chips, class actions and just general REEEE...
Who is waiting in line to be the next guinea pig customer for an unproven fab that may be cranking out another expensive continuation of these problems?
The server space isn't interested in replacing mission-critical chips every 2 months to 2 years over catastrophic degradation, or anything that fails the "proven automation" checkbox.
The usual consumers are way more likely to risk it, but I won't do it and I don't know anyone who will.
Posted on Reply