However, during yesterday's briefing, Timothy Prickett Morgan of The Next Platform asked Jensen Huang, "Will you actually take an implementation of something like Neoverse first and make an Nvidia-branded CPU to drive it into the data center? Will you actually make the reference chip for those who just want it and actually help them run it?"
"Well, the first of all you've made an amazing observation, which is all three options are possible," Huang responded, "[...] So now with our backing and Arm’s serious backing, the world can stand on that foundation and realize that they can build server CPUs. Now, some people would like to license the cores and build a CPU themselves. Some people may decide to license the cores and ask us to build those CPUs or modify ours."
"It is not possible for one company to build every single version of them," Huang continued, "but we will have the entire network of partners around Arm that can take the architectures we come up with and depending on what's best for them, whether licensing the core, having a semi-custom chip made, or having a chip that we made, any of those any of those options are available. Any of those options are available, we're open for business and we would like the ecosystem to be as rich as possible, with as many options as possible."
Nvidia already builds some ARM-based processors for lower-power applications, but having access to ARM's engineering talent will undoubtedly speed the process of designing custom Nvidia data center chips. The company will also have overall control of the ISA, and it's unclear whether Nvidia would be compelled to share all future ARM architecture innovations with other ARM licensees.
During the call, Huang also said he wants to speed up the Neoverse roadmap to bring innovations to ARM licensees faster. Naturally, it would also be in Nvidia's best interest (at least in the short term) to broaden the ecosystem of ARM server chips, and that would require multiple options from a variety of chipmakers.
Nvidia could also drive GPU-specific optimizations, such as CPU/GPU memory coherence, into the ARM architecture, which would then incentivize other chipmakers to pair their designs with Nvidia's GPUs. That approach could help solidify Nvidia's position as the premier AI solutions provider in the data center.
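To illustrate what CPU/GPU memory coherence means in practice, the sketch below uses CUDA's existing unified (managed) memory API as a rough stand-in: a single allocation is read and written by both the CPU and the GPU without explicit copies. This is purely an illustrative example, not a description of any announced Nvidia/ARM design; a hardware-coherent CPU/GPU platform would extend the same programming pattern to ordinary system memory.

```cpp
// Minimal sketch of the programming model that CPU/GPU memory coherence enables:
// one allocation, touched by both processors, with no cudaMemcpy staging.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;   // GPU writes directly into the shared allocation
}

int main() {
    const int n = 1 << 20;
    int *data = nullptr;

    // Unified memory: visible to both CPU and GPU. On a fully coherent
    // CPU/GPU system, plain system allocations could be used the same way.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;          // CPU initializes the buffer

    increment<<<(n + 255) / 256, 256>>>(data, n);     // GPU updates it in place
    cudaDeviceSynchronize();                          // wait for the GPU to finish

    printf("data[0] = %d, data[n-1] = %d\n", data[0], data[n - 1]);  // CPU reads results

    cudaFree(data);
    return 0;
}
```

The appeal of tighter coherence is that the explicit copy and synchronization boilerplate largely disappears, which lowers the barrier to pairing an ARM server CPU with an Nvidia GPU.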