
AMD Pensando Pollara 400 AI NIC Now Available and Shipping to Customers
To effectively train and deploy generative AI, large language models, or agentic AI, it's crucial to build parallel computing infrastructure that not only delivers the performance AI/ML workloads demand but also offers the flexibility the future of AI requires. A key consideration is the ability to scale out the inter-node GPU-to-GPU communication network in the data center.
At AMD, we believe in preserving customer choice by providing easily scalable solutions that work across an open ecosystem, reducing total cost of ownership without sacrificing performance. Remaining true to that ethos, last October we announced the upcoming release of the new AMD Pensando Pollara 400 AI NIC. Today, we're excited to share that the industry's first fully programmable AI NIC, designed to support the developing Ultra Ethernet Consortium (UEC) standards and features, is available for purchase now. So, how has the Pensando Pollara 400 AI NIC been uniquely designed to accelerate AI workloads at scale?