News Posts matching #Milan

AMD Response to "ZENHAMMER: Rowhammer Attacks on AMD Zen-Based Platforms"

On February 26, 2024, AMD received new research related to an industry-wide DRAM issue documented in "ZENHAMMER: Rowhammer Attacks on AMD Zen-based Platforms" from researchers at ETH Zurich. The research demonstrates Rowhammer attacks on DDR4 and DDR5 memory using AMD "Zen" platforms. Given the history around Rowhammer, the researchers do not consider these attacks to be a new issue.

Mitigation
AMD continues to assess the researchers' claim of demonstrating Rowhammer bit flips on a DDR5 device for the first time. AMD will provide an update upon completion of its assessment.

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.
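
As a quick sanity check, the absolute figures implied by TrendForce's percentages can be worked out in a few lines; the derived numbers below are our own back-of-the-envelope estimates, not TrendForce data.

```python
# Back-of-the-envelope estimates derived from the TrendForce figures above.
total_servers_2024 = 13.654e6   # ~13.654 million units shipped in 2024
yoy_growth = 0.0205             # ~2.05% year-over-year growth
ai_server_share = 0.121         # AI servers at ~12.1% of shipments

implied_2023_shipments = total_servers_2024 / (1 + yoy_growth)
implied_ai_servers_2024 = total_servers_2024 * ai_server_share

print(f"Implied 2023 shipments: {implied_2023_shipments / 1e6:.2f} million units")
print(f"Implied 2024 AI server shipments: {implied_ai_servers_2024 / 1e6:.2f} million units")
```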

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

AMD EPYC CPUs Affected by CacheWarp Vulnerability, Patches are Already Available

Researchers at Graz University of Technology and the Helmholtz Center for Information Security have released their paper on CacheWarp—the latest vulnerability affecting some of the prior-generation AMD EPYC CPUs. Tracked as CVE-2023-20592, the exploit targets first-generation EPYC Naples, second-generation EPYC Rome, and third-generation EPYC Milan. CacheWarp operates by exploiting a vulnerability in AMD's Secure Encrypted Virtualization (SEV) technology, specifically targeting the SEV-ES (Encrypted State) and SEV-SNP (Secure Nested Paging) versions. The attack is a software-based fault injection technique that manipulates the cache memory of a virtual machine (VM) running under SEV. It cleverly forces modified cache lines of the guest VM to revert to their previous state. This action circumvents the integrity checks that SEV-SNP is designed to enforce, allowing the attacker to inject faults without being detected.

Unlike attacks that rely on specific guest VM vulnerabilities, CacheWarp is more versatile and dangerous because it does not depend on the characteristics of the targeted VM. It exploits underlying architectural weaknesses of AMD SEV, making it a broad threat to systems relying on this technology for security. The CacheWarp attack can bypass robust security measures like encrypted virtualization, posing a significant risk to data confidentiality and integrity in secure computing environments. AMD has issued a hot-loadable microcode patch and an updated firmware image for EPYC Milan, with no performance degradation expected. For the remaining generations, AMD states that no mitigation is available for first- or second-generation EPYC processors (Naples and Rome), since their SEV and SEV-ES features are not designed to protect guest VM memory integrity and SEV-SNP is not available on them.
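
To make the "revert to a previous state" behaviour easier to picture, here is a toy Python model of a write-back cache whose dirty lines can be discarded. It is purely illustrative and does not reproduce the actual CacheWarp exploit or AMD SEV internals.

```python
# Toy model of the "time travel" effect described above: a write that only
# lives in a modified (dirty) cache line can be discarded, so the backing
# memory silently reverts to its stale value. Purely conceptual.

class ToyCache:
    def __init__(self, memory):
        self.memory = memory          # backing store: address -> value
        self.dirty_lines = {}         # modified cache lines not yet written back

    def write(self, addr, value):
        self.dirty_lines[addr] = value            # write hits the cache first

    def read(self, addr):
        return self.dirty_lines.get(addr, self.memory[addr])

    def writeback(self):
        self.memory.update(self.dirty_lines)      # normal path: changes persist
        self.dirty_lines.clear()

    def drop_dirty_lines(self):
        self.dirty_lines.clear()                  # fault injection: changes vanish

mem = {"auth_flag": 0}
cache = ToyCache(mem)
cache.write("auth_flag", 1)                       # guest believes the flag is set
cache.drop_dirty_lines()                          # attacker discards the update
print(cache.read("auth_flag"))                    # prints 0 -- the old state is back
```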

AMD Adds Six SKUs to 3rd Gen EPYC 7003 "Milan" Lineup

AMD has updated its third-generation EPYC 7003 server-grade CPU lineup with six additional SKUs—new product pages were discovered and disclosed by momomo_us; yesterday's social media post reveals that these Zen 3 chips were launched with zero fanfare on September 5. Team Red did reveal, back in July, that "SAP has chosen AMD EPYC processor-powered Google Cloud N2D virtual machines," but no new products were alluded to at the time. The fresh-ish sextet comprises the EPYC 7663P, 7643P, 7303P, 7303, 7203P, and 7203 models. Evidently there is some demand out there for 2021 processor technology, and AMD has acted on client feedback.

These appear to be wallet-friendly options—perhaps targeting a customer base that cannot stretch its budget to fourth-generation EPYC 9004 "Genoa" options. Tom's Hardware observed: "The EPYC 7663P and EPYC 7643P are fundamentally the single-socket versions of the EPYC 7663 and EPYC 7643, retaining the exact specifications. The bright side is that EPYC 7663P and EPYC 7643P have MSRPs around half of their regular counterparts. The SKUs represent a significant saving for companies with no plans to use a dual-socket configuration." Team Red has dipped back into Zen 3 a couple of times this year—at the mainstream desktop/gaming level. The EPYC 7003 series received a 3D V-Cache refresh last year, going under the "Milan-X" moniker—but the fancier tech added a fair chunk to MSRP.

AMD EPYC 7003 Series CPUs Announced as Powering SAP Applications

Today, AMD announced that SAP has chosen AMD EPYC processor-powered Google Cloud N2D virtual machines (VMs) to run its cloud ERP delivery operations for RISE with SAP, further increasing adoption of AMD EPYC for cloud-based workloads. As enterprises look toward digital modernization, many are adopting cloud-first architectures to complement their on-premises data centers. AMD, Google Cloud and SAP can help customers achieve their most stringent performance goals while delivering on energy efficiency, scalability and resource utilization needs.

AMD EPYC processors offer exceptional performance as well as robust security features, and energy efficient solutions for enterprise workloads in the cloud. RISE with SAP helps maximize customer investments in cloud infrastructure and, paired with AMD EPYC processors and Google Cloud N2D VMs, aims to modernize customer data centers and transform data into actionable insights, faster. "AMD powers some of the most performant and energy efficient cloud instances available in the world today," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "As part of our engagement with Google Cloud and SAP, SAP has selected AMD EPYC CPU-powered N2D instances to host its Business Suite enterprise software workloads. This decision by SAP delivers the performance and performance-per-dollar of EPYC processors to customers looking to modernize their data centers and streamline IT spending by accelerating time to value on their enterprise applications."

AMD EPYC Genoa-X Processor Spotted with 1,248 MB of Total Cache

AMD's EPYC lineup already features the new Zen 4 core designed for better performance and efficiency. However, ever since the release of EPYC Milan-X processors, which brought 3D V-Cache to the company's server offerings, we have wondered whether AMD would continue to make such SKUs for upcoming generations. According to a report from Wccftech, a leaked table of specifications showcases what some seemingly top-end Genoa-X SKUs will look like. The two SKUs listed are the "100-000000892-04" coded engineering sample and the "100-000000892-06" coded retail sample. With support for the same SP5 platform, these CPUs should be easily integrated with existing offerings from OEMs.

As far as specifications go, this processor features 384 MB of L3 cache coming from the CCDs, 768 MB of L3 cache from the 3D V-Cache stacks, and 96 MB of L2 cache, for a total of 1,248 MB of usable cache. A further 3 MB of L1 cache is dedicated to instructions and primary CPU data. That works out to 2.6 times the cache of the regular Genoa design, and 56% more cache than Milan-X. With a TDP of up to 400 W, configurable down to 320 W, this CPU can boost up to 3.7 GHz. AMD EPYC Genoa-X CPUs are expected to hit the shelves in the middle of 2023.
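
The cache totals quoted above can be checked with a few lines of Python; the Milan-X baseline used for the comparison (768 MB of L3 including V-Cache plus 32 MB of L2 across eight CCDs) is our assumption.

```python
# Adding up the quoted cache figures for the Genoa-X sample.
l3_on_ccds = 384   # MB, L3 on the CCDs themselves
l3_v_cache = 768   # MB, stacked 3D V-Cache
l2_total   = 96    # MB, L2
genoa_x_total = l3_on_ccds + l3_v_cache + l2_total           # 1248 MB

genoa_total   = 384 + 96        # regular Genoa: same L3 + L2, no stacked cache
milan_x_total = 768 + 32        # Milan-X baseline: 768 MB L3 + 32 MB L2 (assumed)

print(genoa_x_total)                                          # 1248
print(f"{genoa_x_total / genoa_total:.1f}x regular Genoa")    # 2.6x
print(f"{genoa_x_total / milan_x_total - 1:.0%} more than Milan-X")  # 56%
```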

"Navi 31" RDNA3 Sees AMD Double Down on Chiplets: As Many as 7

Way back in January 2021, we heard a spectacular rumor about "Navi 31," the next-generation big GPU by AMD, being the company's first logic-MCM GPU (a GPU with more than one logic die). The company has a legacy of MCM GPUs, but those have been a single logic die surrounded by memory stacks. The RDNA3 graphics architecture that "Navi 31" is based on sees AMD fragment the logic die into smaller chiplets, with the goal of ensuring that only those specific components that benefit from the TSMC N5 node (5 nm), such as the number-crunching machinery, are built on it, while ancillary components, such as memory controllers, display controllers, or even media accelerators, are confined to chiplets built on an older node, such as TSMC N6 (6 nm). AMD has taken this approach with its EPYC and Ryzen processors, where the chiplets with the CPU cores got the better node, and the other logic components got an older one.

Greymon55 predicts an interesting division of labor on the "Navi 31" MCM. Apparently, the number-crunching machinery is spread across two GCDs (Graphics Complex Dies?). These dies pack the Shader Engines with their RDNA3 compute units (CUs), Command Processor, Geometry Processor, Asynchronous Compute Engines (ACEs), Render Backends, etc. These are things that can benefit from the advanced 5 nm node, enabling AMD to run the CUs at higher engine clocks. There is also sound logic behind building a big GPU with two such GCDs instead of a single large GCD, as smaller GPUs can be made with a single GCD (exactly why two 8-core chiplets make up a 16-core Ryzen processor, with one of them used to create 8-core and 6-core SKUs). The smaller GCD would result in better yields per wafer, and minimize the need for separate wafer orders for a larger die (as in the case of Navi 21).
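
The yield argument can be illustrated with a simple Poisson defect model; the defect density and die areas below are made-up example values, not actual Navi 31 figures.

```python
# Illustrative Poisson yield model: yield = exp(-defect_density * die_area).
# Shows why two smaller GCDs can yield better than one large monolithic die.
from math import exp

defect_density = 0.1          # defects per cm^2 (assumed)
big_die_area   = 5.0          # cm^2, hypothetical monolithic GPU die
small_die_area = 2.5          # cm^2, hypothetical single GCD

yield_big   = exp(-defect_density * big_die_area)
yield_small = exp(-defect_density * small_die_area)

print(f"Monolithic die yield: {yield_big:.1%}")    # ~60.7%
print(f"Single GCD yield:     {yield_small:.1%}")  # ~77.9%
```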

AMD EPYC Processors Hit by 22 Security Vulnerabilities, Patch is Already Out

AMD's EPYC class of enterprise processors is affected by as many as 22 different security vulnerabilities. These vulnerabilities range anywhere from medium to high severity and affect all three generations of AMD EPYC processors: Naples, Rome, and Milan, with nearly all 22 exploits applying to all three generations. There are a few exceptions, which you can find on AMD's website. However, it is not all bad news. AMD says that "During security reviews in collaboration with Google, Microsoft, and Oracle, potential vulnerabilities in the AMD Platform Security Processor (PSP), AMD System Management Unit (SMU), AMD Secure Encrypted Virtualization (SEV) and other platform components were discovered and have been mitigated in AMD EPYC AGESA PI packages."

AMD has already shipped new mitigations in the form of AGESA updates, and users need not worry if they keep their firmware up to date. If you or your organization runs AMD EPYC processors, you should update the firmware to avoid any exploits. The updates in question are the NaplesPI-SP3_1.0.0.G, RomePI-SP3_1.0.0.C, and MilanPI-SP3_1.0.0.4 AGESA versions, which fix all 22 security holes.

AMD Could Use Infinity Cache Branding for Chiplet 3D Vertical Cache

AMD in its Computex 2021 presentation showed off its upcoming "Zen 3" CCD (CPU complex die) featuring 64 MB of "3D vertical cache" memory on top of the 32 MB L3 cache. The die-on-die stacked contraption, AMD claims, provides an up to 15% gaming performance uplift, as well as significant improvements for enterprise applications that can benefit from the 96 MB of total last-level cache per chiplet. Ahead of its debut later today in the company's rumored EPYC "Milan-X" enterprise processor reveal, we're learning that AMD could brand 3D Vertical Cache as "3D Infinity Cache."

This came to light when Greymon55, a reliable source for AMD and NVIDIA leaks, used the term "3D IFC" and affirmed it to be "3D Infinity Cache." AMD realized that its GPUs and CPUs have a lot of untapped performance potential that large on-die caches can unlock by easing much of the hardware's memory-management burden. The RDNA2 family of gaming GPUs features up to 128 MB of on-die Infinity Cache memory operating at bandwidths as high as 16 Tbps, allowing AMD to stick to narrower 256-bit GDDR6 memory interfaces even on its highest-end RX 6900 XT graphics card. For CCDs, this could mean added cushioning for data transfers between the CPU cores and the centralized memory controllers located in the sIOD (server I/O die) or cIOD (client I/O die, in the case of Ryzen parts).
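
For context, the quoted bandwidth figures convert as follows; the 16 Gbps per-pin GDDR6 speed used for the comparison is our assumption for a high-end RDNA2 card.

```python
# Converting the bandwidth figures above into comparable units (GB/s).
infinity_cache_tbps = 16                                   # terabits per second
infinity_cache_gbs = infinity_cache_tbps * 1000 / 8        # -> 2000 GB/s

gddr6_bus_width_bits = 256
gddr6_pin_speed_gbps = 16                                  # assumed per-pin data rate
gddr6_bandwidth_gbs = gddr6_bus_width_bits * gddr6_pin_speed_gbps / 8   # 512 GB/s

print(f"Infinity Cache: {infinity_cache_gbs:.0f} GB/s")
print(f"256-bit GDDR6:  {gddr6_bandwidth_gbs:.0f} GB/s")
```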

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step-up in computing performance from the previous generation due to its higher scalability and support for more memory channels. On the other hand, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

TrendForce: Enterprise SSD Contract Prices Likely to Increase by 15% QoQ for 3Q21 Due to High SSD Demand and Short Supply of Upstream IC Components

The ramp-up of the Intel Ice Lake and AMD Milan processors is expected not only to propel growth in server shipments for two consecutive quarters, from 2Q21 to 3Q21, but also to drive up the share of high-density products in North American hyperscalers' enterprise SSD purchases, according to TrendForce's latest investigations. In China, procurement activities by domestic hyperscalers Alibaba and ByteDance are expected to increase on a quarterly basis as well. With the labor force gradually returning to physical offices, enterprises are now placing an increasing number of IT equipment orders, including servers, compared to 1H21. Hence, global enterprise SSD procurement capacity is expected to increase by 7% QoQ in 3Q21. Ongoing shortages in foundry capacity, however, have led to the supply of SSD components lagging behind demand. At the same time, enterprise SSD suppliers are aggressively raising the share of high-density products in their offerings in an attempt to optimize their product lines' profitability. Taking account of these factors, TrendForce expects contract prices of enterprise SSDs to undergo a staggering 15% QoQ increase for 3Q21.

AMD MI200 "Aldebaran" Memory Size of 128GB Per Package Confirmed

The 128 GB per-package memory size of AMD's upcoming Instinct MI200 HPC accelerator has been confirmed in a document released by the Pawsey Supercomputing Centre, a Perth, Australia-based supercomputing facility that is popular with mineral prospecting companies located there. The centre is currently working on Setonix, a 50-petaFLOP supercomputer being put together by HP Enterprise, which combines over 750 next-generation "Aldebaran" GPUs (referenced only as "AMD MI-Next GPUs") and over 200,000 AMD EPYC "Milan" processor cores (the actual processor package count would be lower, and depends on the various core configurations the builder is using).

The Pawsey document mentions 128 GB as the per-GPU memory. This corresponds with the rumored per-package memory of "Aldebaran." Recently imagined by Locuza_, an enthusiast who specializes in the annotation of logic silicon dies, "Aldebaran" is a multi-chip module of two logic dies and eight HBM2E stacks. Each of the two logic dies, or chiplets, has 8,192 CDNA2 stream processors, adding up to 16,384 on the package; and each of the two dies is wired to four HBM2E stacks over a 4096-bit memory bus. These are 128 Gbit (16 GB) stacks, so we have 64 GB of memory per logic die, and 128 GB on the package. Find other drool-worthy specs of the Pawsey Setonix in the screengrab below.
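
The memory math quoted above adds up as follows; the sketch simply restates the figures from the paragraph rather than adding any new confirmed specifications.

```python
# Restating the memory math for the "Aldebaran" package described above.
hbm2e_stack_gbit = 128
stack_capacity_gb = hbm2e_stack_gbit / 8            # 128 Gbit = 16 GB per stack
stacks_per_die = 4
dies_per_package = 2
stream_processors_per_die = 8192

memory_per_die = stacks_per_die * stack_capacity_gb               # 64 GB
memory_per_package = memory_per_die * dies_per_package            # 128 GB
stream_processors = stream_processors_per_die * dies_per_package  # 16384

print(memory_per_die, memory_per_package, stream_processors)
```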

AMD "Milan-X" Processor Could Use Stacked Dies with X3D Packaging Technology

AMD is in a constant process of processor development, and there are always new technologies on the horizon. Back in March of 2020, the company revealed that it is working on a new X3D packaging technology that integrates both 2.5D and 3D approaches to packing semiconductor dies together as tightly as possible. Today, we are finally getting some more information about the X3D technology, as we have the first codename of a processor featuring this advanced packaging. According to David Schor, AMD is working on a CPU that uses X3D tech with stacked dies, and it is called Milan-X.

The Milan-X CPU is AMD's upcoming product designed for data center usage. The rumors suggest that the CPU is designed for bandwidth-heavy workloads and, presumably, a lot of computing power. According to ExecutableFix, the CPU uses a Genesis-IO die, the I/O die from Zen 3-based EPYC processors, to power its connectivity. While this solution is in the works, we don't know the exact launch date of the processor. However, we could hear more about it in AMD's virtual keynote at Computex 2021. For now, take this rumor with a grain of salt.
AMD X3D Packaging Technology

AMD Announces 3rd Generation EPYC 7003 Enterprise Processors

AMD today announced its 3rd generation EPYC (7003 series) enterprise processors, codenamed "Milan." These processors combine up to 64 of the company's latest "Zen 3" CPU cores with an updated I/O controller die, and promise significant performance uplifts and new security capabilities over the previous-generation EPYC 7002 "Rome." The "Zen 3" CPU cores, AMD claims, introduce an IPC uplift of up to 19% over the previous generation, which, when combined with generational increases in CPU clock speeds, brings about significant single-threaded performance gains. The processor also offers large multi-threaded performance gains thanks to a redesigned CCD.

The new "Zen 3" CPU complex die (CCD) comes with a radical redesign in the arrangement of CPU cores, putting all eight CPU cores of the CCD in a single CCX, sharing a large 32 MB L3 cache. This the total amount of L3 cache addressable by a CPU core, and significantly reduces latencies for multi-threaded workloads. The "Milan" multi-chip module has up to eight such CCDs talking to a centralized server I/O controller die (sIOD) over the Infinity Fabric interconnect.

AMD to Launch 3rd Gen EPYC Processors on March 15

AMD today announced that its 3rd generation EPYC enterprise processors will launch on March 15, 2021. Codenamed "Milan," these processors are expected to leverage the company's latest "Zen 3" CPU microarchitecture to significantly increase IPC (single-threaded performance), and retain compatibility with the SP3 socket. AMD has set up a micro-site where it will stream the 3rd Gen EPYC processor launch event on March 15, at 11 ET (16:00 UTC). "Milan" is rumored to be AMD's final processor architecture on this socket, before transitioning to SP5 and the next-gen processor codenamed "Genoa," sometime in 2022. "Genoa" marks a switch to next-gen I/O such as DDR5 memory and PCIe gen 5.0, along with an increase in CPU core counts.

AMD Zen 4 Reportedly Features a 29% IPC Boost Over Zen 3

While AMD has only released a few Zen 3 processors, which are still extremely hard to purchase at RRP, we are already receiving leaks about their successors. The Zen 3 generation will likely be the final one for socket AM4 before the switch to AM5. AMD appears to be preparing a bridging series of processors based on the Zen 3+ architecture before the release of Zen 4. Zen 3+ is expected to be AMD's first AM5 CPU design and should bring small IPC gains, similar to the improvements from Zen to Zen+, in the range of 4-7%. The Zen 3+ processors will be manufactured on TSMC's refined N7 node, with a potential announcement sometime later in 2021.

Zen 4 is expected to launch next year, in 2022, and will bring significant improvements, potentially up to 40% over Zen 3 once clock boosts are factored in, according to Chips and Cheese. A Zen 4 "Genoa" engineering sample reportedly performed 29% faster than an existing Zen 3 CPU at the same clock speed and core count. The Zen 4 architecture will be manufactured on a 5 nm node and could potentially bring another core-count increase. If true, this would be one of the largest generational improvements for AMD since the launch of Ryzen. As with any rumor, take all this information with a heavy dose of skepticism.
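
If both rumored numbers were accurate, the clock-speed contribution would be modest; the sketch below simply compounds the two figures and is not an official projection.

```python
# Compounding the rumored figures: 29% IPC plus a modest clock bump ~= 40% overall.
ipc_gain = 0.29
claimed_overall_gain = 0.40
implied_clock_gain = (1 + claimed_overall_gain) / (1 + ipc_gain) - 1

print(f"Implied clock-speed gain: {implied_clock_gain:.1%}")   # ~8.5%
```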

AMD EPYC "Milan" Processors Pricing and Specifications Leak

AMD is readying its upcoming EPYC processors based on the refined Zen 3 core. Codenamed "Milan," the processor generation is supposed to bring the same number of PCIe lanes and quite possibly similar memory support. The pricing, along with the specifications, has been leaked, and now we have information on models ranging from eight cores to the whopping 64 cores. Thanks to @momomo_us on Twitter, we got ahold of Canadian pricing leaked on the Dell Canada website. Starting from the cheapest design listed (many SKUs are missing), you would be looking at the EPYC 7543 processor with 32 cores running at 2.8 GHz, 256 MB of L3 cache, and a TDP of 225 W. Such a processor will set you back as much as 2579.69 CAD, which is cheaper than the previous-generation EPYC 7542 at 3214.70 CAD.

Whether this represents more aggressive pricing to position itself better against the competition, we do not know. The same strategy is applied with the 64-core AMD EPYC 7763 processor (2.45 GHz, 256 MB of cache, 280 W TDP), as the new Zen 3-based design is priced at 8069.69 CAD, which is cheaper than the 8180.10 CAD price tag of the AMD EPYC 7762 CPU.
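
For a rough sense of the discount, the leaked Canadian prices work out as follows; this is our own arithmetic on the listed figures.

```python
# Discounts implied by the leaked Dell Canada prices quoted above (CAD).
epyc_7543, epyc_7542 = 2579.69, 3214.70      # 32-core Milan vs. Rome counterpart
epyc_7763, epyc_7762 = 8069.69, 8180.10      # 64-core Milan vs. Rome counterpart

print(f"EPYC 7543 vs. 7542: {1 - epyc_7543 / epyc_7542:.1%} cheaper")   # ~19.8%
print(f"EPYC 7763 vs. 7762: {1 - epyc_7763 / epyc_7762:.1%} cheaper")   # ~1.3%
```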

AMD 32-Core EPYC "Milan" Zen 3 CPU Fights Dual Xeon 28-Core Processors

AMD is expected to announce its upcoming EPYC lineup of processors for server applications based on the new Zen 3 architecture. Codenamed "Milan," it continues AMD's use of Italian cities as codenames for its processors. Being based on the new Zen 3 core, Milan is expected to bring big improvements over the existing EPYC "Rome" design. Built on a refined 7 nm+ process, the new EPYC Milan CPUs are going to feature better frequencies paired with high core counts. If you are wondering what Zen 3 looks like in a server configuration, look no further, because the upcoming AMD EPYC 7543 32-core processor has been benchmarked in Geekbench 4.

The new EPYC 7543 CPU is a 32-core, 64-thread design with a base clock of 2.8 GHz and a boost frequency of 3.7 GHz. The caches on this CPU are big: there is a total of 2048 KB of L1 cache (32 times 32 KB for the instruction cache and 32 times 32 KB for the data cache), 16 MB of L2 cache, and as much as 256 MB of L3. In the GB4 test, the single-core run produced 6065 points, while the multi-core run resulted in 111379 points. If you are wondering how that fares against something like the top-end Intel Xeon Platinum 8280 Cascade Lake 28-core CPU, the new EPYC Milan 7543 is capable of fighting two of them at the same time. In the single-core test, the Intel Xeon configuration scores 5048 points, showing that the new Milan CPU has 20% higher single-core performance, while the multi-core score of the dual-Xeon setup is 117171 points, which is 5% faster than the AMD CPU. The reason for the higher multi-core score is the sheer number of cores that a dual-CPU configuration offers (32 cores vs. 56 cores).
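
The percentage claims can be reproduced directly from the quoted Geekbench 4 scores.

```python
# Reproducing the percentage comparisons from the Geekbench 4 scores above.
epyc_7543_single, epyc_7543_multi = 6065, 111379
dual_xeon_single, dual_xeon_multi = 5048, 117171

print(f"Single-core: EPYC 7543 leads by {epyc_7543_single / dual_xeon_single - 1:.0%}")  # ~20%
print(f"Multi-core: dual Xeon leads by {dual_xeon_multi / epyc_7543_multi - 1:.0%}")     # ~5%
```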

128-Core 2P AMD EPYC "Milan" System Benchmarked in Cinebench R23, Outputs Insane Numbers

AMD is preparing to launch its next generation of EPYC processors, codenamed Milan. Being based on the company's latest Zen 3 cores, the new EPYC generation is going to deliver a massive IPC boost spread across many cores. Models are supposed to range anywhere from 16 to 64 cores, to satisfy all manner of demanding server workloads. Today, thanks to a leak from ExecutableFix on Twitter, we have the first benchmark of a system containing two of the 64-core, 128-thread Zen 3-based EPYC Milan processors. Running in a 2P configuration, the processors achieved a maximum boost clock of 3.7 GHz, which is very high for a server CPU with that many cores.

The system was able to produce an insane Cinebench R23 score of 87,878 points. With that many cores, it is no wonder; however, we need to look at how it fares against the competition. For comparison, the Intel Xeon Platinum 8280L processor, with its 28 cores and 56 threads that boost to 4.0 GHz, can score up to 49,876 points. Of course, scaling to that many cores may not work very well in this particular application, so we have to wait and see how it performs in other workloads before jumping to any conclusions. The launch date of these processors is unknown, so we will report as more information appears.
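
Normalizing the quoted scores by core count illustrates the scaling caveat mentioned above; this is a rough comparison that ignores clock-speed and software-scaling differences.

```python
# Rough per-core normalization of the quoted Cinebench R23 scores.
milan_2p_score, milan_2p_cores = 87878, 128     # dual 64-core EPYC "Milan"
xeon_score, xeon_cores = 49876, 28              # Xeon Platinum 8280L

print(f"EPYC Milan 2P: {milan_2p_score / milan_2p_cores:.0f} points per core")
print(f"Xeon 8280L:    {xeon_score / xeon_cores:.0f} points per core")
```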

AMD Zen 3-based EPYC Milan CPUs to Usher in 20% Performance Increase Compared to Rome

According to a report courtesy of Hardwareluxx, whose contributor Andreas Schilling reportedly gained access to OEM documentation, AMD's upcoming EPYC Milan CPUs are bound to offer up to 20% performance improvements over the previous EPYC generation. The report claims a 15% IPC improvement, paired with an extra 5% added via operating frequency optimization. The report also claims that AMD's 64-core designs will feature a lower-clock all-core operating mode, and a 32-core alternate mode for less-threaded workloads, where extra frequency is added to the working cores.

Apparently, AMD's approach for the Zen 3 architecture does away with L3 subdivisions according to CCXs; now, a full 32 MB of L3 cache is available to each 8-core Core Compute Die (CCD). AMD has apparently achieved new levels of frequency optimization with Zen 3, with higher upward frequency limits than before. This will benefit lower core-count designs the most, as the amount of heat generated is necessarily lower than in more core-dense designs. Milan keeps the same 7 nm manufacturing tech, DDR4, PCIe 4.0, and 120-225 W TDP range as the previous-gen Rome. It remains to be seen how these changes actually translate to the consumer version of Zen 3, "Vermeer," later this year.
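
The headline figure is roughly the product of the two claimed factors; the sketch below just compounds them and is not an official AMD projection.

```python
# The claimed ~20% gain is roughly the product of the two factors above.
ipc_gain = 0.15          # claimed IPC improvement
frequency_gain = 0.05    # claimed frequency-derived improvement
combined_gain = (1 + ipc_gain) * (1 + frequency_gain) - 1

print(f"Combined uplift: {combined_gain:.1%}")   # ~20.8%
```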

AMD 64-core EPYC "Milan" Based on "Zen 3" Could Ship with 3.00 GHz Clocks

AMD's 3rd generation EPYC line of enterprise processors, which leverages the "Zen 3" microarchitecture, could innovate in two directions: towards increasing performance by doing away with the CCX (compute complex) multi-core topology, and by taking advantage of a newer/refined 7 nm-class node to increase clock speeds. Igor's Lab decoded as many as three OPNs of the upcoming 3rd gen EPYC series, including a 64-core/128-thread part that ships with a frequency of 3.00 GHz. The top 2nd gen EPYC 64-core part, the 7662, ships with a 2.00 GHz base frequency, 3.30 GHz boost, and 225 W TDP. AMD is expected to unveil its "Zen 3" microarchitecture within 2020.

Distant Blips on the AMD Roadmap Surface: Rembrandt and Raphael

Several future AMD processor codenames across various computing segments have surfaced courtesy of an Expreview leak that is largely aligned with information from Komachi Ensaka. It does not account for "Matisse Refresh," which is allegedly coming out in June-July as three gaming-focused Ryzen socket AM4 desktop processors; but the roadmap from 2H-2020 through 2022 sees many codenames surface. To begin with, the second half of 2020 promises to be as action-packed as last year's 7/7 mega launch. Over in the graphics business, the company is expected to debut its DirectX 12 Ultimate-compliant RDNA2 client graphics, and its first CDNA architecture-based compute accelerators. Much of the processor launch cycle is based around the new "Zen 3" microarchitecture.

The server platform debuting in the second half of 2020 is codenamed "Genesis SP3." This will be the final processor architecture for the SP3-class enterprise sockets, as it has DDR4 and PCI-Express gen 4.0 I/O. The EPYC server processor is codenamed "Milan," and combines "Zen 3" chiplets along with an sIOD. EPYC Embedded (FP6 package) processors are codenamed "Grey Hawk."

NERSC Finalizes Contract for Perlmutter Supercomputer Powered by AMD Milan and NVIDIA Volta-Successor

The National Energy Research Scientific Computing Center (NERSC), the mission high-performance computing facility for the U.S. Department of Energy's Office of Science, has moved another step closer to making Perlmutter - its next-generation GPU-accelerated supercomputer - available to the science community in 2020.

In mid-April, NERSC finalized its contract with Cray - which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 - for the new system, a Cray Shasta supercomputer that will feature 24 cabinets and provide 3-4 times the capability of NERSC's current supercomputer, Cori. Perlmutter will be deployed at NERSC in two phases: the first set of 12 cabinets, featuring GPU-accelerated nodes, will arrive in late 2020; the second set, featuring CPU-only nodes, will arrive in mid-2021. A 35-petabyte all-flash Lustre-based file system using HPE's ClusterStor E1000 hardware will also be deployed in late 2020.

AMD Confirms Zen 3 and RDNA2 by Late-2020

AMD in its post Q1-2020 earnings release disclosures stated that the company is "on track" to launching its next-generation "Zen 3" CPU microarchitecture and RDNA2 graphics architecture in late-2020. The company did not reveal in what shape or form the two will debut. AMD is readying "Zen 3" based EPYC "Milan" enterprise processors, "Vermeer" Ryzen desktop processors, and "Cezanne" Ryzen mobile APUs based on "Zen 3," although there's no word on which product line the microarchitecture will debut with. "Zen 3" compute dies (CCDs) are expected to do away with the quad-core compute complex (CCX) arrangement of cores, and are expected to be built on a refined 7 nm-class silicon fabrication process, either TSMC N7P or N7+.

The only confirmed RDNA2-based products we have as of now are the semi-custom SoCs that drive the Sony PlayStation 5 and Microsoft Xbox Series X next-generation consoles, which are expected to debut by late-2020. The AMD tweet, however, specifies "GPUs" (possibly referring to discrete GPUs). Also, with AMD forking its graphics IP into RDNA (for graphics processors) and CDNA (for headless compute accelerators), we're fairly sure AMD is referring to a Radeon RX or Radeon Pro launch in the tweet. Microsoft's announcement of the DirectX 12 Ultimate logo is expected to expedite the launch of Radeon RX discrete GPUs based on RDNA2, as the current RDNA architecture doesn't meet the logo requirements.