NVIDIA's most recent FY3Q24 financial report reveals record-high revenue from its data center segment, driven by escalating demand for AI servers from major North American CSPs. However, TrendForce points out that recent US government sanctions targeting China have impacted NVIDIA's business in the region. Although NVIDIA has quickly introduced compliant products such as the H20, L20, and L2 alongside strong shipments of its high-end GPUs, Chinese cloud operators are still in the testing phase, making substantial revenue contributions to NVIDIA unlikely in Q4. Gradual shipment increases are expected from the first quarter of 2024.
The US ban continues to weigh on China's AI server market, with Chinese CSPs' high-end AI server shipment share potentially dropping below 4% next year
TrendForce reports that North American CSPs like Microsoft, Google, and AWS will remain key drivers of high-end AI servers (including those with NVIDIA, AMD, or other high-end ASIC chips) from 2023 to 2024, with estimated shipment shares of 24%, 18.6%, and 16.3%, respectively, for 2024. Chinese CSPs such as ByteDance, Baidu, Alibaba, and Tencent (BBAT) are projected to hold a combined shipment share of approximately 6.3% in 2023. However, this could fall to less than 4% in 2024, considering the current and potential future impacts of the ban.
China to expand investment in proprietary ASICs and general-purpose AI chips due to limited access to high-end AI chips
Facing the risk of expanded restrictions arising from the US ban, TrendForce believes Chinese companies will continue to buy existing AI chips in the short term. NVIDIA's GPU AI accelerator chips remain a top priority, including existing A800 and H800 inventories and new models like the H20, L20, and L2 designed specifically for the Chinese market following the ban. In the long term, Chinese CSPs are expected to accelerate development of their own proprietary ASICs, with Alibaba's T-Head and Baidu being particularly active in this area, relying on foundries like TSMC and Samsung for production.
At the same time, major Chinese AI firms such as Huawei and Biren will continue to develop general-purpose AI chips to provide AI solutions for local businesses. Beyond developing AI chips, these companies aim to establish a domestic AI server ecosystem in China. TrendForce notes that a key factor for success will be support from the Chinese government through localized projects, such as those involving Chinese telecom operators, which encourage the adoption of domestic AI chips.
Edge AI servers: A potential opportunity for Chinese firms amid high-end AI chip development constraints
A notable challenge in developing high-end chips in China is the limited access to advanced manufacturing technology. This is particularly true for Huawei, which remains on the US Entity List and relies on domestic foundries like SMIC for production. Despite SMIC's advancements, it faces similar issues created by the US ban, including difficulties in obtaining key advanced manufacturing equipment and potential yield issues. TrendForce believes that in trying to overcome these limitations, China may find opportunities in the mid- to low-range edge AI server market. These servers, with lower AI computational demands, cater to applications like commercial chatbots, video streaming, internet platforms, and automotive assistance systems. They might not be fully covered by US restrictions, presenting a possible growth direction for Chinese firms in the AI market.
20 Comments on NVIDIA Experiences Strong Cloud AI Demand but Faces Challenges in China, with High-End AI Server Shipments Expected to Be Below 4% in 2024
Really, their stock price is like a rocket this year. I'm happy I got in early enough, but I won't be adding to my portfolio and I'm not selling.
I wonder what the rumor going around that OpenAI made a breakthrough on artificial general intelligence might mean for Nvidia.
Clearly NVIDIA will have to massively step up their imaginary lawbreaking in order to make up for this incredible shortfall! Also, and let me be completely clear here: you conspiracy theorists are complete and utter idiots, which is why you're broke and shitposting on tech forums, and NVIDIA shareholders aren't. LMAO, and where do you think they're going to fab that custom silicon? Where everyone else does, i.e. TSMC.
You know, the same TSMC that is massively backlogged with orders?
The company that NVIDIA has a contract with for a large percentage of wafer allocation, and all those other companies don't?
The company that isn't going to give time of day to anyone who isn't willing to commit to massive volumes, which the aforementioned players can't?
Or maybe they're going to use Intel... oh wait, they're outsourcing to TSMC too now.
Guess it's GloFo then, I'm sure that'll work just great!
ASICs are so many times more efficient that you can manufacture them on a more mature and cheaper process whose capacity is not compromised. TSMC's capacity is also increasing, not decreasing.
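Back-of-the-envelope version of that argument, using the 30x performance-per-watt figure quoted later in this thread for Google's TPU and an assumed ~2.5x efficiency penalty for stepping back to a mature node (that penalty is my own guess, purely illustrative):

```python
# Illustrative only: both numbers below are assumptions, not measurements.
asic_perf_per_watt_gain = 30.0  # low end of the "30-80X" TPU claim quoted later in the thread
mature_node_penalty = 2.5       # assumed efficiency hit from using an older, cheaper process

effective_gain = asic_perf_per_watt_gain / mature_node_penalty
print(f"Effective perf/W advantage on a mature node: ~{effective_gain:.0f}x")
# Even with a hefty node penalty, the ASIC still comes out well ahead,
# which is why mature-node capacity can be good enough here.
```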
Or just use GPUs? Bad times for buyers, I'm guessing.
There's certainly enough evidence on show that the AI and ML market in China does matter to someone.
Let's hope the recent raids on Nvidia offices found a company behaving as it should, not one jumping through loopholes.
Because there have been a few coincidences lately:
Like Nvidia BIOSes getting cracked.
Oh, the 4090/80 are going EOL soon, ready for the Supers.
Oh look, millions of 4090s are now getting taken apart in China.
You have to design your custom ASICs first.
You have to design the software that they're going to use.
You have to write that software.
You have to migrate over all your current software, or write a translation layer that does so (a rough sketch of what that involves is below).
All of the above take time.
All of the above involve significant risk.
Tesla is the only company that has tried this and that's because they're run by a man rich enough to not give a s**t. Other companies would much rather stick with what they know and what works, and just buy more hardware. It's not the cost that's the problem, it's the supply, which leads to... And they're going to offer that capacity to the companies that already have massive long-term contracts with them.
Companies like, I dunno, NVIDIA. You need help.
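To make the "translation layer" point concrete, here's a minimal Python sketch (hypothetical names and backends, nothing vendor-specific) of the kind of dispatch shim you'd need so existing model code keeps calling one function while new hardware gets slotted in underneath:

```python
# Hypothetical sketch of a "translation layer": route one operation (matmul)
# to whichever backend is available. A real framework has to do this for
# every op, dtype, and memory-layout corner case, which is where the time
# and risk mentioned above come from.
from typing import Callable, Dict, List

Matrix = List[List[float]]

def _cpu_matmul(a: Matrix, b: Matrix) -> Matrix:
    # Reference implementation, standing in for the software you already have.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def _custom_asic_matmul(a: Matrix, b: Matrix) -> Matrix:
    # Placeholder for a vendor kernel; here it just falls back to the CPU path.
    return _cpu_matmul(a, b)

BACKENDS: Dict[str, Callable[[Matrix, Matrix], Matrix]] = {
    "cpu": _cpu_matmul,
    "custom_asic": _custom_asic_matmul,
}

def matmul(a: Matrix, b: Matrix, backend: str = "cpu") -> Matrix:
    # Existing model code calls this; only the dispatch table knows about new hardware.
    return BACKENDS[backend](a, b)

print(matmul([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], backend="custom_asic"))
```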
The state of the AI market can be illustrated by considering Tesla as an example. A while back, the company invested in a market that few believed in until there was an explosion in demand. Being ahead of the curve, Tesla was able to set its prices freely. However, with increasing competition, the battle over pricing is putting pressure on profit margins, compelling companies to invest in maximizing efficiency. Simply swap Tesla for NVIDIA in this historical context and you will see the pattern repeating. Even Huang has admitted the existence of this risk:
"Nvidia CEO Jensen Huang says his AI powerhouse is ‘always in peril’ despite a $1.1 trillion market cap: ‘We don’t have to pretend…we feel it’ "
Nvidia CEO says his AI powerhouse is ‘always in peril’ | Fortune
Nvidia sells GPUs at a very high price, which also encourages companies to look for alternatives.
2x is feasible. Mind you, Nvidia could easily do the same.
There's a big incentive to cut operating costs like this when Nvidia sells GPUs costing tens of thousands of dollars each.
"In 2013, Google realized that unless they could create a chip that could handle machine learning inference, they would have to double the number of data centers they possessed. Google claims that the resulting TPU has “15–30X higher performance and 30–80X higher performance-per-watt” than current CPUs and GPUs."
"Although both TPUs and GPUs can do tensor operations, TPUs are better at big tensor operations, which are more common in neural network training than 3D graphics rendering. The TPU core of Google is made up of two parts a Matrix Multiply Unit and a Vector Processing Unit. When it comes to the software layer, an optimizer is used to switch between bfloat16 and bfloat32 operations (where 16 and 32 are the number of bits) so that developers don’t have to rewrite their code. As a result, the TPU systolic array architecture has a large density and power advantage, as well as a non-negligible speed advantage over a GPU."
copperpod.medium.com/tensor-processing-unit-tpu-an-ai-powered-asic-for-cloud-computing-81ffb1256dd2
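For anyone curious what "bfloat16 inputs, wider accumulation" actually buys, here's a small NumPy sketch (my own emulation via mantissa truncation, not anything from the linked article) that mimics that data path and prints the resulting error:

```python
import numpy as np

def to_bfloat16_like(x):
    # Emulate bfloat16 storage by truncating the low 16 mantissa bits of
    # float32 (truncation rather than round-to-nearest, for simplicity).
    x = np.ascontiguousarray(x, dtype=np.float32)
    return (x.view(np.uint32) & np.uint32(0xFFFF0000)).view(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((128, 128)).astype(np.float32)
b = rng.standard_normal((128, 128)).astype(np.float32)

# Inputs held at bfloat16-like precision, products accumulated in float32,
# mirroring the quoted description of the matrix unit's data path.
low_precision = to_bfloat16_like(a) @ to_bfloat16_like(b)
full_precision = a @ b

rel_err = np.abs(low_precision - full_precision).max() / np.abs(full_precision).max()
print(f"Max relative error with bfloat16-like inputs: {rel_err:.4f}")
```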
I stand by what I said: independent verification or I'm not buying anything more than double the efficiency.