
AMD Believes EPYC CPUs & Instinct GPUs Will Accelerate AI Advancements

T0@st

News Editor
If you're looking for innovative use of AI technology, look to the cloud. Gartner reports that "73% of respondents to the 2024 Gartner CIO and Tech Executive Survey have increased funding for AI," and IDC says that AI "will have a cumulative global economic impact of $19.9 trillion through 2030." But end users aren't running most of those AI workloads on their own hardware. Instead, they are largely relying on cloud service providers and large technology companies to provide the infrastructure for their AI efforts. This approach makes sense, since most organizations are already heavily reliant on the cloud. According to O'Reilly, more than 90% of companies use public cloud services. And they aren't moving just a few workloads to the cloud: the same report shows 175% growth in cloud-native interest, indicating that companies are committing heavily to the cloud.

As a result of this demand for infrastructure to power AI initiatives, cloud service providers are finding it necessary to rapidly scale up their data centers. IDC predicts: "the surging demand for AI workloads will lead to a significant increase in datacenter capacity, energy consumption, and carbon emissions, with AI datacenter capacity projected to have a compound annual growth rate (CAGR) of 40.5% through 2027." While this surge creates massive opportunities for service providers, it also introduces challenges. Providing the computing power necessary to support AI initiatives at scale, reliably, and cost-effectively is difficult. Many providers have found that deploying AMD EPYC CPUs and Instinct GPUs can help them overcome those challenges. Here's a quick look at three service providers that are using AMD chips to accelerate AI advancements.
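To put that 40.5% CAGR figure in perspective, a short sketch of the compounding arithmetic (the 2023 baseline of 1.0 and the four-year span to 2027 are assumptions chosen only for illustration):

```python
# Illustrative sketch: compound growth at the 40.5% CAGR that IDC
# projects for AI datacenter capacity through 2027. The baseline of
# 1.0 and the four-year horizon are assumptions for illustration only.
def project_capacity(base: float, cagr: float, years: int) -> float:
    """Return `base` compounded forward by `years` at annual rate `cagr`."""
    return base * (1 + cagr) ** years

growth = project_capacity(1.0, 0.405, 4)
print(f"{growth:.2f}x")  # 3.90x: capacity nearly quadruples in four years
```

In other words, sustained 40.5% annual growth roughly quadruples AI datacenter capacity over four years, which is why providers are racing to build out infrastructure now.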




Flexible, Cost-Effective Performance at Microsoft Azure
Microsoft offers an extensive line of AMD-powered Azure Virtual Machines (VMs) as part of its lineup of cloud computing services. It has tailored the specs of these offerings to a wide variety of use cases, ranging from general purpose, memory-intensive, and storage-optimized to high-performance computing, confidential computing, and AI.
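As a sketch of how one of these AMD-powered VMs might be provisioned with the Azure CLI (the resource group and VM names below are hypothetical; the Dasv5-series sizes are among Azure's AMD EPYC-based general-purpose offerings):

```shell
# Hypothetical provisioning sketch using the Azure CLI ("az").
# "demo-rg" and "epyc-vm" are made-up names for illustration;
# Standard_D4as_v5 is an AMD EPYC-based general-purpose VM size.
az group create --name demo-rg --location eastus
az vm create \
  --resource-group demo-rg \
  --name epyc-vm \
  --image Ubuntu2204 \
  --size Standard_D4as_v5
```

Swapping the `--size` argument is all it takes to move a workload between general-purpose, memory-optimized, or GPU-accelerated tiers, which is a large part of the flexibility the article describes.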



Azure customers appreciate the cost-effectiveness of these VMs. For example, Henrik Klemola, Director, Cloud COE at Epicor, says: "We have been using Azure Virtual Machines featuring the AMD EPYC processor for the past two years to run a number of business-critical applications as part of the Epicor solution portfolio. We have been very pleased with the consistent performance and compelling price-performance that these VMs have been able to deliver. We are looking forward to continuing to benefit from the innovation that Microsoft Azure will make available to us, including the ability to access cost-effective Azure services that are based on the latest AMD EPYC processors."

Those new innovations include accelerators that support AI and other intensive workloads, like financial analysis, design, and engineering applications. Microsoft explains: "traditional CPU-only VMs often struggle to keep up with these apps, frustrating users and reducing productivity, but until now, deploying GPU-accelerated, on-premises virtual environments has been too costly." By making the latest AMD advances available in the cloud, Microsoft is providing customers with the performance they need at a price point they can afford.

Scalable Growth at Oracle Cloud Infrastructure
Another major cloud provider using AMD to push the limits of what's possible in AI is Oracle. In September, Oracle Cloud Infrastructure (OCI) chose AMD Instinct accelerators to power its newest OCI Compute Supercluster instance. Oracle explains that these instances allow customers to "run the most demanding AI workloads faster, including generative AI, computer vision, and predictive analytics." They enable the deployment of massive scale-out clusters for training large-scale AI models.



That scalability has been very attractive to Uber, which began migrating to OCI Compute with AMD and OCI AI infrastructure in 2023. "As we continue to grow and enter new markets, we need the flexibility to leverage a wide range of cloud services to help ensure we're providing the best possible customer experience," says Kamran Zargahi, Senior Director of Tech Strategy and Cloud Engineering at Uber. "Collaborating with Oracle has allowed us to innovate faster while managing our infrastructure costs. With OCI, our products can run on best-of-breed infrastructure that is designed to support multi-cloud environments and can scale to support profitable growth."

Next-Generation Deep Learning at Meta
Meta is also heavily investing in cloud data centers to support the AI services it offers users. More than three billion people interact with Meta services like Facebook, Instagram, and WhatsApp every day. And Meta is working to quickly integrate generative AI into all those applications. To accomplish that goal, Meta has invested heavily in AMD technology. In fact, it has deployed more than 1.5 million AMD EPYC CPUs in its servers around the world. AMD Instinct GPUs have also been a part of its success. Kevin Salvadori, VP, Infrastructure and Engineering at Meta, explains: "All Meta live traffic has been served using MI300X exclusively due to its large memory capacity and TCO advantage."

In addition, AMD chips play a central role in Meta's Open Hardware vision. Meta notes in a blog post: "Scaling AI at this speed requires open hardware solutions. Developing new architectures, network fabrics, and system designs is the most efficient and impactful when we can build it on principles of openness. By investing in open hardware, we unlock AI's full potential and propel ongoing innovation in the field."

Grand Teton, Meta's next-gen open AI platform, supports the AMD Instinct MI300X Platform. That enables Grand Teton to run Meta's deep learning recommendation models, content understanding, and other memory-intensive workloads.

Expand the limits of what's possible
If your team is interested in pushing the limits of what's possible with today's AI innovations, try an AMD Instinct-based cloud instance from one of the public cloud vendors, or build AMD Instinct GPUs into your own infrastructure. They provide the performance, low total cost of ownership, and ease of adoption needed to supercharge AI.

View at TechPowerUp Main Site | Source
 
AMD Announces AIPYC APUs And AInstinct APUs Are Accelerating AI Advancements. Aye!
 
I think many people interested in AI would prefer to run their data locally instead of having that data scraped. Open is great, but not everyone wants to give away their hard work to the next copycat ready to hijack the effort and training time they've invested.
 
More DeepSeek fallout press releases
Prepare for tons of those. Thing is, I think they are lying about the cost, just like fishermen do about their.... catch...
 
I think many people interested in AI would prefer to run the data locally instead of that data being scraped. Open is great, but not everyone wants to give away their hard work at the same time to the next copy cat ready to hijack their hard efforts and training they've invested in the form of time.

That's why building your own platform is desired in this one.
 

Toothless

Tech, Games, and TPU!
I think many people interested in AI would prefer to run the data locally instead of that data being scraped. Open is great, but not everyone wants to give away their hard work at the same time to the next copy cat ready to hijack their hard efforts and training they've invested in the form of time.
Some people want AI waifus and don't mind sharing the degenerate status with others.
 
To the above I'd like to add the following for some perspective.
And that was done with an RPi and RX7700.
As much trouble as we might have with the Chinese gov, the people themselves are as much a treasure as any other innovators in the world.
 
To the above I'd like to add the following for some perspective.
And that was done with an RPi and RX7700.
As much trouble as we might have with the Chinese gov, the people themselves are as much a treasure as any other innovators in the world.
Cuz their schools focus on STEM and the American schools focus on useless trivia.
 
Some people want AI waifus and don't mind sharing the degenerate status with others.
Only in tangible form, otherwise they can get nicked.
 
Protecting yourself and your assets is a pretty vital thing in life, and that extends to AI a good bit in turn. If you're thinking of developing a product, you probably don't want it hijacked, unless it was deliberately meant to be free in the first place, like a cat meme until it's not. AI cat memes, $10, holla holla!
 