News Posts matching #EPYC


AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

Over the next year and a half, the Finnish IT Center for Science (CSC) will purchase a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster, which uses Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. System memory per node will range from 96 GB up to 1.5 TB, and the entire system will receive a 4.9 PB Lustre parallel file system from DDN. Furthermore, a separate partition of phase one will be dedicated to AI research, featuring 320 NVLink-connected NVIDIA V100 GPUs in 4-GPU nodes, with peak performance expected to reach 2.5 petaflops. Phase one will be brought online in the summer of 2019.

Where things get interesting is phase two, set for completion in the spring of 2020. Atos will build CSC a liquid-cooled, HDR-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle requires a great deal of system memory, so each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel had this market cornered for nearly a decade.
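
A quick sanity check of those phase-two figures is sketched below. The implied ~2 GHz all-core clock assumes 16 double-precision FLOPs per core per cycle (two 256-bit FMA units per "Zen 2" core); that per-cycle figure is our assumption, not something stated in the announcement.

```python
# Back-of-the-envelope check of the phase-two figures. The FLOPs-per-cycle
# value is an assumption about "Zen 2" FPU width, not a disclosed number.
TOTAL_CORES = 200_000
CORES_PER_CPU = 64
PEAK_FLOPS = 6.4e15          # 6.4 petaflops (peak)
FLOPS_PER_CYCLE = 16         # assumed: 2x 256-bit FMA per core

cpus = TOTAL_CORES // CORES_PER_CPU
implied_clock_ghz = PEAK_FLOPS / (TOTAL_CORES * FLOPS_PER_CYCLE) / 1e9

print(cpus)                          # 3125 processors
print(round(implied_clock_ghz, 2))   # ~2.0 GHz all-core
```

Under those assumptions, the 6.4-petaflop figure corresponds to a conservative ~2.0 GHz all-core clock across all 3,125 processors.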

AMD Doubles L3 Cache Per CCX with Zen 2 "Rome"

A SiSoft SANDRA results database entry for a 2P AMD "Rome" EPYC machine sheds light on the lower cache hierarchy. Each 64-core EPYC "Rome" processor is made up of eight 7 nm 8-core "Zen 2" CPU chiplets, which converge at a 14 nm I/O controller die, which handles memory and PCIe connectivity of the processor. The result mentions cache hierarchy, with 512 KB dedicated L2 cache per core, and "16 x 16 MB L3." Like CPU-Z, SANDRA has the ability to see L3 cache by arrangement. For the Ryzen 7 2700X, it reads the L3 cache as "2 x 8 MB L3," corresponding to the per-CCX L3 cache amount of 8 MB.

For each 64-core "Rome" processor, there are a total of 8 chiplets. With SANDRA detecting "16 x 16 MB L3" for 64-core "Rome," it becomes highly likely that each 8-core chiplet features two 16 MB L3 cache slices, with its 8 cores split into two quad-core CCX units of 16 MB L3 cache each. This doubling of L3 cache per CCX could help the processors better cushion data transfers between the chiplets and the I/O die, which becomes particularly important since the I/O die controls memory with its monolithic 8-channel DDR4 memory controller.
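
The arrangement SANDRA reports reduces to simple arithmetic; note that the 8-chiplet, 2-CCX-per-chiplet layout used here is the inference drawn above, not a confirmed specification.

```python
# Cache arithmetic behind SANDRA's "16 x 16 MB L3" reading, assuming
# 8 chiplets per package and 2 CCXs per chiplet (inferred, not confirmed).
chiplets = 8
ccx_per_chiplet = 2
l3_per_ccx_mb = 16

l3_slices = chiplets * ccx_per_chiplet     # 16 slices -> matches "16 x 16 MB"
total_l3_mb = l3_slices * l3_per_ccx_mb    # 256 MB of L3 per 64-core package
per_ccx_vs_zen1 = l3_per_ccx_mb / 8        # 2x the 8 MB per CCX of "Zen"

print(l3_slices, total_l3_mb, per_ccx_vs_zen1)
```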

Intel Could Upstage EPYC "Rome" Launch with "Cascade Lake" Before Year-end

Intel is reportedly working tirelessly to launch its "Cascade Lake" Xeon Scalable 48-core enterprise processor before year-end, according to a launch window timeline slide leaked by datacenter hardware provider QCT. The slide suggests a late-Q4 through Q1-2019 launch window for the XCC (extreme core count) version of "Cascade Lake," which packs 48 CPU cores across two dies on an MCM. This launch is part of QCT's "early shipment program," meaning select enterprise customers can obtain the hardware in pre-approved quantities. In other words, it's a limited launch, but one that's probably enough to upstage AMD's 7 nm EPYC "Rome" 64-core processor launch.

It's only by late Q1 through Q2 2019 that the Xeon "Cascade Lake" family will launch in earnest, including lower core-count variants that are still 2-die MCMs. This timing aligns to preempt or match AMD's 7 nm EPYC family rollout through 2019. "Cascade Lake" is probably Intel's final enterprise microarchitecture built on the 14 nm++ node, and consists of 2-die multi-chip modules featuring 48 cores, a 12-channel memory interface (6 channels per die), and 88 PCIe lanes from the CPU socket. The processor is capable of multi-socket configurations, and will also serve as Intel's launch platform for its Optane Persistent Memory product series.

Stuttgart-based HLRS to Build a Supercomputer with 10,000 64-core Zen 2 Processors

Höchstleistungsrechenzentrum (HLRS, or High-Performance Computing Center), based in Stuttgart, Germany, is building a new cluster supercomputer powered by 10,000 AMD "Zen 2" "Rome" 64-core processors, for a total of 640,000 cores. Called "Hawk," the supercomputer will be HLRS' flagship system, and will open its doors to business in 2019. The slide deck for Hawk makes a fascinating disclosure about the processors it's based on.

Apparently, each of the 64-core "Rome" EPYC processors has a guaranteed clock speed of 2.35 GHz, meaning the processor can sustain 2.35 GHz even at maximum load (with all cores 100% loaded). This is important because the supercomputer's advertised throughput is calculated on this basis, and clients draw up SLAs based on that throughput. The advertised peak throughput for the whole system is 24.06 petaFLOP/s, although the center is yet to put out nominal/guaranteed performance numbers (which it will only after first-hand testing). The system features 665 TB of RAM and 26,000 TB of storage.
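
The advertised peak can be reproduced from the disclosed figures, assuming 16 double-precision FLOPs per core per cycle (two 256-bit FMA units per "Zen 2" core); that per-cycle width is our assumption, not something stated in the slide deck.

```python
# Reproduces Hawk's advertised 24.06 petaFLOP/s peak from the disclosed
# figures. FLOPs-per-cycle is an assumed "Zen 2" value, not from the slides.
processors = 10_000
cores = processors * 64        # 640,000 cores total
clock_hz = 2.35e9              # guaranteed all-core clock
flops_per_cycle = 16           # assumed: 2x 256-bit FMA per core

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(round(peak_pflops, 2))   # 24.06
```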

AMD "Zen 2" IPC 29 Percent Higher than "Zen"

AMD reportedly put out IPC (instructions per clock) performance guidance for its upcoming "Zen 2" microarchitecture in a version of its Next Horizon investor presentation, and the numbers are staggering. The next-generation CPU architecture provides a massive 29 percent IPC uplift over the original "Zen" architecture. While not developed for the enterprise segment, the stopgap "Zen+" architecture brought about 3-5 percent IPC uplifts over "Zen" on the back of faster on-die caches and improved Precision Boost algorithms. "Zen 2" is being developed for the 7 nm silicon fabrication process, and on the "Rome" MCM takes the form of 8-core chiplets that reportedly aren't subdivided into CCXs (8 cores per CCX).

According to Expreview, AMD ran a DKERN + RSA test of integer and floating-point units to arrive at a performance index of 4.53, compared to 3.5 for first-generation "Zen," which works out to a 29.4 percent IPC uplift (loosely interchangeable with single-core performance). "Zen 2" goes a step beyond "Zen+," with its designers turning their attention to the components that contribute most toward IPC: the core's front-end, and the number-crunching machinery, the FPU. The front-ends of "Zen" and "Zen+" cores are believed to be refinements of previous-generation architectures such as "Excavator." "Zen 2" gets a brand-new front-end that's better optimized to distribute and collect workloads between the various on-die components of the core. The number-crunching machinery is bolstered by 256-bit FPUs, and generally wider execution pipelines and windows. Together, these yield the IPC uplift. "Zen 2" will get its first commercial outing with AMD's 2nd-generation EPYC "Rome" 64-core enterprise processors.
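
The headline number is straightforward to reproduce from the two index values:

```python
# The uplift arithmetic behind the headline figure: a composite index of
# 4.53 for "Zen 2" against 3.5 for first-generation "Zen".
zen1_index = 3.5
zen2_index = 4.53

uplift_pct = (zen2_index - zen1_index) / zen1_index * 100
print(round(uplift_pct, 1))   # 29.4
```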

Update Nov 14: AMD has issued the following statement regarding these claims.
As we demonstrated at our Next Horizon event last week, our next-generation AMD EPYC server processor based on the new 'Zen 2' core delivers significant performance improvements as a result of both architectural advances and 7nm process technology. Some news media interpreted a 'Zen 2' comment in the press release footnotes to be a specific IPC uplift claim. The data in the footnote represented the performance improvement in a microbenchmark for a specific financial services workload which benefits from both integer and floating point performance improvements and is not intended to quantify the IPC increase a user should expect to see across a wide range of applications. We will provide additional details on 'Zen 2' IPC improvements, and more importantly how the combination of our next-generation architecture and advanced 7nm process technology deliver more performance per socket, when the products launch.

Intel Puts Out Additional "Cascade Lake" Performance Numbers

Intel late last week put out additional real-world HPC and AI compute performance numbers for its upcoming "Cascade Lake" 2x 48-core (96 cores in total) machine, compared to an AMD EPYC 7601 2x 32-core (64 cores in total) machine. You'll recall that on November 5th, the company put out Linpack, Stream Triad, and Deep Learning Inference numbers, which are all synthetic benchmarks. In a new set of slides, the company revealed a few real-world HPC/AI application performance numbers, including MIMD Lattice Computation (MILC), Weather Research and Forecasting (WRF), OpenFOAM, NAMD scalable molecular dynamics, and YASK.

The Intel 96-core setup with its 12-channel memory interface belts out up to 1.5X performance in MILC, up to 1.6X in WRF and OpenFOAM, up to 2.1X in NAMD, and up to 3.1X in YASK, compared to the AMD EPYC 7601 2P machine. The company also put out system configuration and disclaimer slides with the usual forward-looking CYA. "Cascade Lake" will be Intel's main competitor to AMD's EPYC "Rome" 64-core 4P-capable processor that arrives in 2019. Intel's product is a multi-chip module of two 24~28 core dies, with a 2x 6-channel DDR4 memory interface.

Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

Intel today announced two new members of its Intel Xeon processor portfolio: Cascade Lake advanced performance (expected to be released the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (general availability today). These two new product families build upon Intel's foundation of 20 years of Intel Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.

"We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers' system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers," said Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing.

AMD Could Solve Memory Bottlenecks of its MCM CPUs by Disintegrating the Northbridge

AMD sprang back to competitiveness in the datacenter market with its EPYC enterprise processors, which are multi-chip modules of up to four 8-core dies. Each die has its own integrated northbridge, which controls 2-channel DDR4 memory and a 32-lane PCI-Express gen 3.0 root complex. In applications that not only utilize more cores but are also memory-bandwidth intensive, this approach to non-localized memory presents design bottlenecks. The Ryzen Threadripper WX family highlights many of these bottlenecks: memory-intensive video-encoding benchmarks see performance drops as dies without direct access to I/O are starved of memory bandwidth. AMD's solution to this problem is to design CPU dies with a disabled northbridge (the part of the die with the memory controllers and PCIe root complex). This solution could be implemented in its upcoming 2nd-generation EPYC processors, codenamed "Rome."

With its "Zen 2" generation, AMD could develop CPU dies in which the integrated northbridge can be completely disabled (just like the "compute dies" on Threadripper WX processors, which don't have direct memory/PCIe access and rely entirely on Infinity Fabric). These dies talk to an external die called the "System Controller" over a broader Infinity Fabric interface. AMD's next-generation MCMs could see a centralized System Controller die surrounded by CPU dies, all sitting on a silicon interposer of the kind found on "Vega 10" and "Fiji" GPUs. An interposer is a silicon die that facilitates high-density microscopic wiring between dies in an MCM. These explosive speculative details and more were put out by Singapore-based @chiakokhua, aka The Retired Engineer, a retired VLSI engineer who drew the block diagrams himself.

AMD Zen 2 GNU Compiler Patch Published, Exposes New Instruction Sets

With a November deadline for feature freeze fast approaching, GNU toolchain developers are adding the last features to GCC 9.0 (GNU Compiler Collection). Ahead of that deadline, AMD has released its first basic patch adding the "znver2" target, and therefore Zen 2 support, to GCC. While the patch uses the same cost tables and scheduler data as znver1, it does feature three new instructions that will be available on AMD's next-gen CPUs: Cache Line Write Back (CLWB), Read Processor ID (RDPID), and Write Back and Do Not Invalidate Cache (WBNOINVD).

These three instructions are the only ones found thus far by digging through the current code. As this is the first patch, it can be considered a jumping-off point, ensuring that the GCC 9.1 stable release, due in 2019, has support for Zen 2. Further optimizations and instructions may be implemented in the future. This is likely, since AMD has yet to update the scheduler and cost tables, which suggests it may not want to reveal everything about Zen 2 just yet. You could say AMD is playing it safe for now, at least until its 7 nm EPYC 2 processors launch in 2019.
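
On Linux, whether a CPU already advertises these three instructions can be checked from the kernel's reported feature flags. A minimal sketch, assuming the kernel exposes them under the flag names `clwb`, `rdpid`, and `wbnoinvd` (the kernel's usual lowercase convention):

```python
# Minimal sketch: parse the "flags" line of /proc/cpuinfo and report
# whether the three instructions added by the znver2 patch are present.
# Linux-only and purely illustrative; flag names assume the kernel's
# lowercase naming convention.
def znver2_flag_support(cpuinfo_text: str) -> dict:
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.split(":")[0].strip() == "flags":
            flags = set(line.split(":", 1)[1].split())
            break
    return {name: name in flags for name in ("clwb", "rdpid", "wbnoinvd")}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(znver2_flag_support(f.read()))
    except FileNotFoundError:
        print("/proc/cpuinfo not available (not Linux)")
```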

AMD "Vega 20" GPU Not Before Late Q1-2019

AMD "Vega 20" is a new GPU based on the existing "Vega" graphics architecture, which will be fabbed on the 7 nanometer silicon fabrication process and bolstered with up to 32 GB of HBM2 memory across a 4096-bit memory interface, double the bus width of "Vega 10." AMD CEO Lisa Su already exhibited a mock-up of this chip at Computex 2018, with word that alongside its "Zen 2"-based EPYC enterprise processors, "Vega 20" will be the first 7 nm GPU. AMD could still make good on that word, only don't expect to find one under your tree this Holiday.

According to GamersNexus, the first "Vega 20" products won't launch before the turn of the year, and even in 2019, products may not arrive until the end of Q1 (before April). GamersNexus cites reliable sources hinting at the later-than-expected arrival of "Vega 20" as part of refuting alleged "Final Fantasy XV" benchmarks of purported "Vega 20" engineering samples doing the rounds on the web. Lisa Su stressed the importance of data-center GPUs in AMD's Q3-2018 earnings call, which could hint at AMD allocating its first "Vega 20" yields to high-margin enterprise brands such as Radeon Pro and Radeon Instinct.

AMD and Oracle Collaborate to Provide AMD EPYC Processor-Based Offering in the Cloud

Today at Oracle OpenWorld 2018, AMD (NASDAQ: AMD) announced the availability of the first AMD EPYC™ processor-based instance on Oracle Cloud Infrastructure. With this announcement, Oracle becomes the largest public cloud provider to offer a bare-metal version on AMD EPYC™ processors. The AMD EPYC processor-based "E" series will lead with the bare-metal Standard "E2," available immediately as the first instance type within the series. At $0.03 per core hour, the AMD EPYC instance is on average up to 66 percent less per core than general-purpose instances offered by the competition, and is the most cost-effective instance available on any public cloud.

"With the launch of the AMD instance, Oracle has once again demonstrated that we are focused on getting the best value and performance to our customers," said Clay Magouyrk, senior vice president, software development, Oracle Cloud Infrastructure. "At greater than 269 GB/s, the AMD EPYC platform offers the highest memory bandwidth of any public cloud instance. Combined with increased performance, these cost advantages help customers maximize their IT dollars as they make the move to the cloud."

AMD Zen 2 Offers a 13% IPC Gain over Zen+, 16% over Zen 1

AMD's "Zen" CPU architecture brought the company back to competitive relevance in the processor market. It received an incremental update in the form of "Zen+," which brought an improved 12 nm process, an improved multi-core boosting algorithm, and improvements to the cache subsystem. AMD is banking on "Zen 2" to deliver not only IPC (instructions per clock) improvements but also a new round of core-count increases. Bits n Chips has information that "Zen 2" is making significant IPC gains.

According to the Italian tech publication, we could expect "Zen 2" IPC gains of 13 percent over "Zen+," which in turn posted 2-5 percent IPC gains over the original "Zen." Bits n Chips notes that these gains were measured in scientific tasks, not in gaming; there is no gaming performance data at the moment. AMD is expected to debut "Zen 2" with its 2nd-generation EPYC enterprise processors by the end of the year, built on the 7 nm silicon fabrication process. The roughly 16 percent IPC gain versus the original "Zen" (compounding 13 percent over "Zen+" with the 2-5 percent "Zen+" already delivers), coupled with higher clocks and possibly more cores, could complete the value proposition of 2nd-gen EPYC. "Zen 2"-based client-segment products can be expected only in 2019.

AMD and Xilinx Announce a New World Record for AI Inference

At today's Xilinx Developer Forum in San Jose, Calif., our CEO, Victor Peng, was joined by AMD CTO Mark Papermaster for a Guinness. But not the kind that comes in a pint - the kind that comes in a record book. The companies revealed that AMD and Xilinx have been working jointly to connect AMD EPYC CPUs and the new Xilinx Alveo line of acceleration cards for high-performance, real-time AI inference processing. To back it up, they revealed a world-record 30,000 images-per-second inference throughput!

The impressive system, which will be featured in the Alveo ecosystem zone at XDF today, leverages two AMD EPYC 7551 server CPUs, with their industry-leading PCIe connectivity, along with eight of the freshly announced Xilinx Alveo U250 acceleration cards. The inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference, and which supports numerous machine learning frameworks such as TensorFlow. The benchmark was performed on GoogLeNet, a widely used convolutional neural network.

AMD CEO Speaks with Jim Cramer About the "Secret Sauce" Behind its Giant-Killing Spree

Jim Cramer of CNBC's Mad Money interviewed AMD CEO Dr. Lisa Su on the floor of the NYSE, calling her company one of the year's biggest tech turnaround stories. The two spoke on a variety of topics, including how the company went from a single-digit stock and a loss-making entity to one of the hottest tech stocks, one that threatens both Intel and NVIDIA. Dr. Su placed emphasis on taking long-term strategic decisions that bear fruit years down the line.

"We decided to make the right investments. Technology is all about making the right choices, where we're going to invest, and where we're not going to invest...three or four years ago, it was mobile phones, tablets, and IoT that were the sexy things, and we were like 'hey we know that those are good markets, but those are not AMD.' We focused on what we thought the future would hold for us," said Dr. Su. "We are making decisions now that you won't see the outcome of for the next 3-5 years. We're making some good decisions," she added.

AMD Fast-tracks 7nm "Navi" GPU to Late-2018 Alongside "Zen 2" CPU

AMD is unique in the world of computing as the only company with both high-performance CPU and GPU products. For the past several years we have been executing our multi-generational leadership product and architectural roadmap. Just in the last 18 months, we successfully introduced and ramped our strongest set of products in more than a decade and our business has grown dramatically as we gained market share across the PC, gaming and datacenter markets.

The industry is at a significant inflection point as the pace of Moore's Law slows while the demand for computing and graphics performance continues to grow. This trend is fueling significant shifts throughout the industry and creating new opportunities for companies that can successfully bring together architectural, packaging, system and software innovations with leading-edge process technologies. That is why at AMD we have invested heavily in our architecture and product roadmaps, while also making the strategic decision to bet big on the 7nm process node. While it is still too early to provide more details on the architectural and product advances we have in store with our next wave of products, it is the right time to provide more detail on the flexible foundry sourcing strategy we put in place several years ago.

No 16-core AMD Ryzen AM4 Until After 7nm EPYC Launch (2019)

AMD in its Q2-2018 investor conference call dropped more hints about when it plans to launch its 3rd-generation Ryzen processors, based on its "Zen 2" architecture. CEO Lisa Su stated in the Q&A session that the rollout of 7 nm Ryzen processors will only follow that of 7 nm EPYC (unlike 1st-generation Ryzen, which preceded 1st-generation EPYC). What this effectively means is that the fabled 16-core die with 8 cores per CCX won't make it to the desktop platform any time soon (at least not in the next three quarters, and certainly not within 2018).

The AMD CEO also touched upon the development of the company's 7 nm "Rome" silicon, which will be at the heart of the 2nd-generation EPYC processor family. 2nd-generation EPYC, as you'd recall from our older article, is based on the 7 nm "Zen 2" architecture, not 12 nm "Zen+," and 3rd-generation Ryzen is expected to be based on "Zen 2" as well. As of now, the company is said to have completed the tape-out of "Rome," and is sending samples to its industry partners for further testing and validation. The first EPYC products based on this silicon will begin rolling out in 2019. The 7 nm process is also being used for a new "Vega"-based GPU, which has taped out and will see its first enterprise-segment product launch within 2018.

AMD EPYC Airport Ads Punch Close to the Belt

Airports are the latest battleground for AMD and Intel as the two vie to catch the attention of IT managers in the midst of an AI and big-data inflection point that promises to trigger a gold rush for enterprise processors. AMD took to San Jose International Airport with its latest AMD EPYC static ads, targeted at IT managers who have stuck with Intel Xeon for its historic market leadership. AMD EPYC processors offer "more performance, more security, and more value" than Intel Xeon processors, the ads claim, but not before landing a mean punch in the general area of Intel's belt.

AMD Shares to Jump 25% in Wake of PC Market Growth: Stifel

Analyst Kevin Cassidy, responsible for AMD's shares rating at Stifel, has revised his expected AMD growth in the wake of expected (and already verified) PC market growth. Following the news at the end of last week, AMD shares jumped 5% on Friday and another 2% on Monday, marking an increase of 61.3% year-to-date (YTD). On this trend, the analyst raised his 12-month price target on the stock from $17 to $21, a nearly 27% upside from Monday's close.

AMD Deepens Senior Management and Technical Leadership Bench

AMD today announced key promotions that extend senior-level focus on company growth. AMD named "Zen" chief architect Mike Clark an AMD corporate fellow; promoted Darren Grasby to senior vice president of global Computing and Graphics sales and AMD president for Europe, Middle East, and Africa (EMEA); and promoted Robert Gama to senior vice president and chief human resources officer.

"We believe the opportunities ahead of us are tremendous as we execute on our long-term strategy and exciting product roadmap," said Lisa Su, AMD president and CEO. "As leaders, Mike, Darren, and Robert have made significant contributions to our success so far, and these promotions elevate their impact at AMD as we accelerate company growth going forward."

GIGABYTE Expands AMD EPYC Family with New Density Optimized Server

GIGABYTE continues our active development of new AMD EPYC platforms with the release of the 2U 4 Node H261-Z60, the first AMD EPYC variant of our Density Optimized Server Series. The H261-Z60 combines 4 individual hot pluggable sliding node trays into a 2U server box. The node trays slide in and out easily from the rear of the unit.

EPYC Performance
Each node supports dual AMD EPYC 7000-series processors, with up to 32 cores, 64 threads, and 8 channels of memory per CPU. Therefore, each node can feature up to 64 cores and 128 threads of compute power. Memory-wise, each socket utilizes EPYC's 8 channels of memory with 1 DIMM per channel (8 DIMMs per socket), for a total of 16 DIMMs per node (over 2 TB of memory supported per node).
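
The capacity math checks out if one assumes 128 GB modules; the release does not name a DIMM size, so that module capacity is an assumption for illustration.

```python
# Per-node DIMM and capacity arithmetic for the H261-Z60 spec above.
# The 128 GB module size is assumed, not stated in the release.
sockets_per_node = 2
channels_per_socket = 8
dimms_per_channel = 1
dimm_gb = 128  # assumed module capacity

dimms_per_node = sockets_per_node * channels_per_socket * dimms_per_channel
capacity_tb = dimms_per_node * dimm_gb / 1024

print(dimms_per_node, capacity_tb)   # 16 DIMMs, 2.0 TB
```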

AMD to Polevault Zen+, Head Straight to 7nm Zen2 for EPYC

AMD in its Computex 2018 address earlier today mentioned that its second-generation EPYC enterprise processors will be based on its 7 nanometer "Zen 2" architecture, and not 12 nm "Zen+." The company has the 7 nm silicon ready in its labs, and will begin sampling within the second half of 2018. The first products could launch in 2019, after validation. Besides improved energy efficiency, the 12 nm "Zen+" architecture features a minor 3-5 percent IPC uplift thanks to improved multi-core clock-speed boosting and faster caches. "Zen 2," on the other hand, presents AMD with the opportunity to make major design changes to its silicon to achieve higher IPC uplifts. The 7 nm process introduces significant transistor-density uplifts over the current process. AMD is meanwhile building 4-die multi-chip modules using the 12 nm "Pinnacle Ridge" silicon for its 2nd-generation Ryzen Threadripper HEDT client processor family.

AMD EPYC Secure Encrypted Virtualization Not So Secure: Researchers

Secure Encrypted Virtualization (SEV) was touted as one of the killer features of AMD EPYC and Ryzen Pro series processors. It involves encrypting the parts of the host machine's memory that house virtual machines (or guests), with encryption keys stored on the processor, so the host has no way to infiltrate or read the contents of the guest's memory. This was designed to build trust in the cloud-computing and shared-hosting industries, so web-present small businesses with sensitive data could have some peace of mind and wouldn't have to spend big on dedicated hosting. A Germany-based IT security research team from Fraunhofer AISEC thinks otherwise.

Using a technique called "SEVered," the researchers were able to use a rogue host-level administrator, or malware within the hypervisor, to bypass SEV and copy decrypted information from a guest machine's memory. The exploit involves altering the guest machine's physical memory mappings using standard page tables, so that SEV can't properly isolate and encrypt parts of the guest in physical memory. The exploit is so brazen that you can pull plaintext information out of compromised guests. The researchers have published a paper on SEVered, along with technical details of the exploit.

Cray Debuts AMD EPYC Processors in Supercomputer Product Line

Global supercomputer leader Cray Inc. today announced it has added AMD EPYC processors to its Cray CS500 product line. To meet the growing needs of high-performance computing (HPC), the combination of AMD EPYC 7000 processors with the Cray CS500 cluster systems offers Cray customers a flexible, high-density system tuned for their demanding environments. The powerful platform lets organizations tackle a broad range of HPC workloads without the need to rebuild and recompile their x86 applications.

"Cray's decision to offer the AMD EPYC processors in the Cray CS500 product line expands its market opportunities by offering buyers an important new choice," said Steve Conway, senior vice president of research at Hyperion Research. "The AMD EPYC processors are expressly designed to provide highly scalable, energy- and cost-efficient performance in large and midrange clusters."

GIGABYTE Refreshes Their AMD EPYC Server Lineup

GIGABYTE, an industry leader in competitive, high-performance server motherboards and systems, has refreshed its AMD EPYC 1U and 2U server lineup with a range of updated options supporting different storage-device combinations, with increased NVMe connectivity to integrate denser, higher-bandwidth storage. These five systems are part of GIGABYTE's ready-to-integrate, general-purpose Rack Server family, equipped with the best power supplies and cooling fans, and combining a high level of performance, energy efficiency, and overall reliability for web hosting, mass storage, virtualized infrastructures, databases & analytics, and other demanding applications.

GIGABYTE's AMD EPYC server systems are based on the 7000-series EPYC processor, offered as an SoC and incorporating a multi-die design with 32 cores per processor, 128 PCIe lanes, and 8 channels of DDR4 memory. These features have allowed GIGABYTE to create a range of servers that pack a real punch in flexibility and expansion options. First released in July last year, GIGABYTE's AMD EPYC servers have been gaining momentum, lowering TCO for datacenters by offering an optimal balance of compute, memory, I/O, and security.

AMD Throws EPYC Jab at Intel Xeon Products on Cloudfest

Cloudfest is a summit of sorts, a running line of conferences and announcements that focus on the cloud side of computing. With the increasing market value of and demand for cloud services and providers, it's no surprise that industry behemoths are in attendance. AMD is one such company, and it took the opportunity to throw a slight jab at Intel. Making the best of its long-coming favorable position in the server market, AMD put up a banner with an EPYC pun, touting a 3.3x performance-per-dollar advantage versus the Xeon competition... and then some. Just take a look at the image for yourself. It's all in good sport... Right?